MobileNet-v2-Quantized: Optimized for Mobile Deployment

ImageNet classifier and general-purpose backbone

MobileNetV2 is a machine learning model that can classify images from the ImageNet dataset. It can also serve as a backbone for building more complex models for specific use cases.

This model is an implementation of MobileNet-v2-Quantized found here.

This repository provides scripts to run MobileNet-v2-Quantized on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Image classification
  • Model Stats:
    • Model checkpoint: ImageNet
    • Input resolution: 224x224
    • Number of parameters: 3.49M
    • Model size: 3.42 MB

| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| MobileNet-v2-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 0.451 | 0 - 26 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 0.517 | 0 - 3 | INT8 | NPU | MobileNet-v2-Quantized.so |
| MobileNet-v2-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 55.292 | 10 - 72 | INT8 | NPU | MobileNet-v2-Quantized.onnx |
| MobileNet-v2-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.316 | 0 - 34 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.36 | 0 - 19 | INT8 | NPU | MobileNet-v2-Quantized.so |
| MobileNet-v2-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 43.894 | 1 - 324 | INT8 | NPU | MobileNet-v2-Quantized.onnx |
| MobileNet-v2-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.296 | 0 - 23 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.365 | 0 - 27 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | SA7255P ADP | SA7255P | TFLITE | 1.847 | 0 - 16 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | SA7255P ADP | SA7255P | QNN | 2.168 | 0 - 10 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 0.447 | 0 - 26 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.5 | 0 - 3 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | SA8295P ADP | SA8295P | TFLITE | 0.834 | 0 - 22 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | SA8295P ADP | SA8295P | QNN | 0.891 | 0 - 14 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 0.453 | 0 - 26 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 0.516 | 0 - 2 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | SA8775P ADP | SA8775P | TFLITE | 0.773 | 0 - 17 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | SA8775P ADP | SA8775P | QNN | 0.834 | 0 - 10 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | TFLITE | 1.111 | 0 - 21 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 1.324 | 0 - 11 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | RB5 (Proxy) | QCS8250 Proxy | TFLITE | 13.735 | 0 - 7 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 1.847 | 0 - 16 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 2.168 | 0 - 10 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 0.448 | 0 - 27 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.512 | 0 - 4 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 0.773 | 0 - 17 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 0.834 | 0 - 10 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 0.495 | 0 - 23 | INT8 | NPU | MobileNet-v2-Quantized.tflite |
| MobileNet-v2-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 0.607 | 0 - 24 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 0.629 | 0 - 0 | INT8 | NPU | Use Export Script |
| MobileNet-v2-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 54.007 | 28 - 28 | INT8 | NPU | MobileNet-v2-Quantized.onnx |

Installation

Install the package via pip:

pip install qai-hub-models

Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
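
To confirm the client is configured correctly, you can list the cloud-hosted devices available to your account; a minimal sketch using the qai_hub Python client:

import qai_hub as hub

# A successful call confirms the API token is configured correctly.
for device in hub.get_devices():
    print(device.name)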

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.mobilenet_v2_quantized.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, use the following in your cell instead of the command above.

%run -m qai_hub_models.models.mobilenet_v2_quantized.demo

Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:

  • Performance check of the model on a cloud-hosted device
  • Download of compiled assets that can be deployed on-device for Android
  • Accuracy check between PyTorch and on-device outputs
python -m qai_hub_models.models.mobilenet_v2_quantized.export
Profiling Results
------------------------------------------------------------
MobileNet-v2-Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE                 
Estimated inference time (ms)   : 0.5                    
Estimated peak memory usage (MB): [0, 26]                
Total # Ops                     : 108                    
Compute Unit(s)                 : NPU (108 ops)          
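
The export script also accepts command-line options, for example to select a different device or target runtime; the exact flags vary by package version, so consult the built-in help:

python -m qai_hub_models.models.mobilenet_v2_quantized.export --help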

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.mobilenet_v2_quantized import Model

# Load the pre-trained model
torch_model = Model.from_pretrained()

# Device to compile for
device = hub.Device("Samsung Galaxy S24")

# Trace the model using its input spec and sample inputs
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(
    torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
)

# Compile the traced model for the target device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)

# Get the compiled target model to run on-device
target_model = compile_job.get_target_model()
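
Once compilation succeeds, you can also save the compiled asset locally; a minimal sketch, assuming the compile job above (the output filename is illustrative):

# Save the compiled asset locally (the filename is illustrative).
target_model.download("MobileNet-v2-Quantized.tflite")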

Step 2: Performance profiling on cloud-hosted device

After compiling the model in step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.

# Profile the compiled model on the cloud-hosted device
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
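
The profiling metrics can also be retrieved programmatically; a minimal sketch using the qai_hub client (the structure of the returned dictionary may vary by version):

# Wait for the job and download the profiling results as a dictionary;
# the key names inside are version-dependent, so inspect them first.
profile_data = profile_job.download_profile()
print(profile_data.keys())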

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

# Run inference on-device with sample inputs
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)

# Download the on-device outputs for comparison
on_device_output = inference_job.download_output_data()

With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
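
For example, a PSNR check between the PyTorch reference output and the on-device output might look like the following; a minimal sketch, assuming the torch_model, input_data, and on_device_output variables from the snippets above:

import numpy as np
import torch

# Reference output from the PyTorch model (first sample of each input)
torch_inputs = [torch.tensor(v[0]) for v in input_data.values()]
ref = torch_model(*torch_inputs).detach().numpy()

# On-device output is a dict mapping output names to lists of arrays
dev = next(iter(on_device_output.values()))[0]

# Peak signal-to-noise ratio between reference and on-device outputs
mse = np.mean((ref - dev) ** 2)
psnr = 10 * np.log10(np.max(np.abs(ref)) ** 2 / (mse + 1e-12))
print(f"PSNR: {psnr:.2f} dB")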

Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access.

Run demo on a cloud-hosted device

You can also run the demo on-device.

python -m qai_hub_models.models.mobilenet_v2_quantized.demo --on-device

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, use the following in your cell instead of the command above.

%run -m qai_hub_models.models.mobilenet_v2_quantized.demo -- --on-device

Deploying compiled model to Android

The model can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
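
Before integrating the model into an app, you can sanity-check the exported .tflite asset on your host machine; a minimal sketch, assuming TensorFlow is installed and the compiled file was saved as MobileNet-v2-Quantized.tflite (an illustrative path):

import numpy as np
import tensorflow as tf

# Load the compiled TFLite model (the path is illustrative)
interpreter = tf.lite.Interpreter(model_path="MobileNet-v2-Quantized.tflite")
interpreter.allocate_tensors()

# Run a random sample through the 224x224 classifier and check the output shape
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(*inp["shape"]).astype(inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)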

View on Qualcomm® AI Hub

Get more details on MobileNet-v2-Quantized's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of MobileNet-v2-Quantized can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
