DeepLabV3 Quantized is designed for semantic segmentation at multiple scales, trained on various datasets. It uses MobileNet as a backbone.

This model is an implementation of DeepLabV3-Plus-MobileNet-Quantized found [here]({source_repo}).

This repository provides scripts to run DeepLabV3-Plus-MobileNet-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet_quantized).
- Model size: 6.04 MB
- Number of output classes: 21
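As an illustration of what the 21 output classes mean in practice (this snippet is not part of the repository), a raw score map from a segmentation head reduces to a per-pixel class mask with an argmax over the class axis; the NCHW `(batch, classes, height, width)` layout used below is an assumption, so check the model's actual output spec:

```python
import numpy as np

# Toy logits shaped like a typical segmentation head output:
# (batch, classes, height, width). The NCHW layout is an assumption
# made for illustration only.
NUM_CLASSES = 21  # matches "Number of output classes: 21" above
logits = np.random.default_rng(0).normal(size=(1, NUM_CLASSES, 4, 4))

# Per-pixel class prediction: argmax over the class axis.
mask = logits.argmax(axis=1)[0]  # shape: (height, width)
print(mask.shape)  # (4, 4)
```

Each value in `mask` is then an index into the label set (0 is conventionally background).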
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 3.304 ms | 0 - 146 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 5.214 ms | 0 - 12 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.so) |
| DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 4.221 ms | 11 - 18 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.onnx) |
| DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 2.825 ms | 0 - 65 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 3.844 ms | 1 - 25 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.so) |
| DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 3.141 ms | 0 - 72 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.onnx) |
| DeepLabV3-Plus-MobileNet-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | TFLITE | 14.162 ms | 5 - 48 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 18.291 ms | 1 - 9 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | RB5 (Proxy) | QCS8250 Proxy | TFLITE | 127.38 ms | 11 - 63 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 3.315 ms | 0 - 8 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 3.963 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 3.335 ms | 0 - 4 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 3.97 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 3.294 ms | 0 - 9 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | SA8775 (Proxy) | SA8775P Proxy | QNN | 3.994 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 3.328 ms | 0 - 115 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 3.963 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 4.166 ms | 5 - 71 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 5.51 ms | 1 - 32 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 2.441 ms | 0 - 42 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) |
| DeepLabV3-Plus-MobileNet-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 3.816 ms | 1 - 25 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 2.494 ms | 0 - 47 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.onnx) |
| DeepLabV3-Plus-MobileNet-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 4.324 ms | 1 - 1 MB | INT8 | NPU | Use Export Script |
| DeepLabV3-Plus-MobileNet-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 4.68 ms | 17 - 17 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.onnx) |
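As a rough reading aid (not a figure from the model card), a single-stream latency in milliseconds converts to an implied frames-per-second figure as 1000 / latency:

```python
# Reading aid only: implied single-stream throughput from the TFLite
# latencies in the table above (FPS = 1000 / latency_ms).
latencies_ms = {
    "Samsung Galaxy S23": 3.304,
    "Samsung Galaxy S24": 2.825,
    "Snapdragon 8 Elite QRD": 2.441,
}
fps = {device: 1000.0 / ms for device, ms in latencies_ms.items()}
for device, value in fps.items():
    print(f"{device}: {value:.0f} FPS")
```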
## Installation
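The diff omits the body of this section (README lines 67–116). Model cards in this series are typically installed from PyPI with a model-specific extra; the exact extra name below is an assumption that mirrors the module path used by the export command:

```shell
# Assumed standard setup for qai-hub-models model cards; verify the
# extra name against the package's documentation before relying on it.
python -m venv qai_hub_env
source qai_hub_env/bin/activate
pip install "qai_hub_models[deeplabv3_plus_mobilenet_quantized]"
```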
```bash
python -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.export
```
```
Profiling Results
------------------------------------------------------------
DeepLabV3-Plus-MobileNet-Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 3.3
Estimated peak memory usage (MB): [0, 146]
Total # Ops                     : 104
Compute Unit(s)                 : NPU (104 ops)
```
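The export script prints its profiling summary as plain text like the sample above. If you need those numbers programmatically, a small parser along these lines handles that `key : value` layout (this helper is an illustration, not part of qai_hub_models):

```python
import re

# Sample text in the format printed by the export script above.
SAMPLE = """\
Profiling Results
------------------------------------------------------------
DeepLabV3-Plus-MobileNet-Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 3.3
Estimated peak memory usage (MB): [0, 146]
Total # Ops                     : 104
Compute Unit(s)                 : NPU (104 ops)
"""

def parse_profile(text: str) -> dict:
    """Collect 'key : value' lines from a profiling summary into a dict."""
    fields = {}
    for line in text.splitlines():
        match = re.match(r"(.+?)\s*:\s*(.+)", line)
        if match:
            fields[match.group(1).strip()] = match.group(2).strip()
    return fields

profile = parse_profile(SAMPLE)
print(profile["Runtime"])                               # TFLITE
print(float(profile["Estimated inference time (ms)"]))  # 3.3
```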
Get more details on DeepLabV3-Plus-MobileNet-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of DeepLabV3-Plus-MobileNet-Quantized can be found [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
## References
* [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)
* [Source Model Implementation](https://github.com/jfzhang95/pytorch-deeplab-xception)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).