---
datasets:
- VOC2012
library_name: pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- quantized
- android
---
# DeepLabV3-Plus-MobileNet-Quantized: Optimized for Mobile Deployment

Quantized deep convolutional neural network model for semantic segmentation.
DeepLabV3 Quantized is designed for semantic segmentation at multiple scales, trained on various datasets. It uses MobileNet as a backbone.
This model is an implementation of DeepLabV3-Plus-MobileNet-Quantized found here. This repository provides scripts to run DeepLabV3-Plus-MobileNet-Quantized on Qualcomm® devices. More details on model performance across various devices can be found here.
## Model Details

- Model Type: Semantic segmentation
- Model Stats:
  - Model checkpoint: VOC2012
  - Input resolution: 513x513
  - Number of parameters: 5.80M
  - Model size: 6.04 MB
  - Number of output classes: 21
Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
---|---|---|---|---|---|---|---|---|
DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 3.304 ms | 0 - 146 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 5.214 ms | 0 - 12 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.so |
DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 4.221 ms | 11 - 18 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.onnx |
DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 2.825 ms | 0 - 65 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 3.844 ms | 1 - 25 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.so |
DeepLabV3-Plus-MobileNet-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 3.141 ms | 0 - 72 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.onnx |
DeepLabV3-Plus-MobileNet-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | TFLITE | 14.162 ms | 5 - 48 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 18.291 ms | 1 - 9 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | RB5 (Proxy) | QCS8250 Proxy | TFLITE | 127.38 ms | 11 - 63 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 3.315 ms | 0 - 8 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 3.963 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 3.335 ms | 0 - 4 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 3.97 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 3.294 ms | 0 - 9 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | SA8775 (Proxy) | SA8775P Proxy | QNN | 3.994 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 3.328 ms | 0 - 115 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 3.963 ms | 1 - 2 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 4.166 ms | 5 - 71 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 5.51 ms | 1 - 32 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 2.441 ms | 0 - 42 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.tflite |
DeepLabV3-Plus-MobileNet-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 3.816 ms | 1 - 25 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 2.494 ms | 0 - 47 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.onnx |
DeepLabV3-Plus-MobileNet-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 4.324 ms | 1 - 1 MB | INT8 | NPU | Use Export Script |
DeepLabV3-Plus-MobileNet-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 4.68 ms | 17 - 17 MB | INT8 | NPU | DeepLabV3-Plus-MobileNet-Quantized.onnx |
## Installation

This model can be installed as a Python package via pip.

```bash
pip install "qai-hub-models[deeplabv3_plus_mobilenet_quantized]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to docs for more information.
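As a quick sanity check (a minimal sketch, not part of the official setup steps), you can list the cloud-hosted devices visible to your configured client with the `qai_hub` Python API:

```python
import qai_hub as hub

# Fails with an authentication error if the API token is not configured.
for device in hub.get_devices():
    print(device.name)
```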
## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo
```

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: To run the demo in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the above command.

```python
%run -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo
```
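For a feel of the underlying API, here is a minimal sketch of loading the pre-trained model and running a forward pass on a dummy input. The output layout is an assumption and may differ for the quantized model wrapper:

```python
import torch
from qai_hub_models.models.deeplabv3_plus_mobilenet_quantized import Model

# Load the pre-trained model as a callable PyTorch module.
model = Model.from_pretrained()

# Dummy RGB input at the model's 513x513 input resolution (NCHW).
dummy = torch.rand(1, 3, 513, 513)

with torch.no_grad():
    scores = model(dummy)

# Assumption: the output is per-pixel class scores over the 21 VOC
# classes; argmax over the class dimension gives a segmentation mask.
mask = scores.argmax(dim=1)
print(mask.shape)
```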
## Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
- Performs an on-device performance check on a cloud-hosted device
- Downloads compiled assets that can be deployed on-device for Android
- Checks accuracy between PyTorch and on-device outputs

```bash
python -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.export
```
## Profiling Results

```text
------------------------------------------------------------
DeepLabV3-Plus-MobileNet-Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 3.3
Estimated peak memory usage (MB): [0, 146]
Total # Ops                     : 104
Compute Unit(s)                 : NPU (104 ops)
```
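For reference, here is a rough sketch of the kind of workflow the export script automates via the `qai_hub` job API. The device name and input shape below are illustrative, and the actual script handles quantized-model specifics beyond what is shown:

```python
import torch
import qai_hub as hub
from qai_hub_models.models.deeplabv3_plus_mobilenet_quantized import Model

# Trace the PyTorch model so it can be compiled for a target device.
torch_model = Model.from_pretrained()
input_shape = (1, 3, 513, 513)  # matches the model's input resolution
traced = torch.jit.trace(torch_model, [torch.rand(input_shape)])

# Compile for a cloud-hosted device, then profile the compiled asset.
device = hub.Device("Samsung Galaxy S23")
compile_job = hub.submit_compile_job(
    model=traced,
    device=device,
    input_specs=dict(image=input_shape),
)
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
```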
## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo --on-device
```

NOTE: To run the demo in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the above command.

```python
%run -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo -- --on-device
```
## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): This tutorial provides a guide to deploy the `.tflite` model in an Android application.
- QNN (`.so` export): This sample app provides instructions on how to use the `.so` shared library in an Android application.
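Before wiring the `.tflite` asset into an Android app, it can help to sanity-check it on the host with the TensorFlow Lite Python interpreter. A minimal sketch, with an illustrative file path:

```python
import numpy as np
import tensorflow as tf

# Load the exported asset (path is illustrative).
interpreter = tf.lite.Interpreter(
    model_path="DeepLabV3-Plus-MobileNet-Quantized.tflite"
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's declared shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]).shape)
```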
## View on Qualcomm® AI Hub

Get more details on DeepLabV3-Plus-MobileNet-Quantized's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
## License

- The license for the original implementation of DeepLabV3-Plus-MobileNet-Quantized can be found here.
- The license for the compiled assets for on-device deployment can be found here.
## Community

- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.