Whisper-Base-En: Optimized for Mobile Deployment
Automatic speech recognition (ASR) model for English transcription
OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, accurately transcribing audio clips of up to 30 seconds. Time to first token is the encoder's latency, while time to each additional token is the decoder's latency, where we assume the mean decoded length specified below.
This model is an implementation of Whisper-Base-En found here.
This repository provides scripts to run Whisper-Base-En on Qualcomm® devices. More details on model performance across various devices can be found here.
Model Details
- Model Type: Speech recognition
- Model Stats:
- Model checkpoint: base.en
- Input resolution: 80x3000 (30 seconds audio)
- Mean decoded sequence length: 112 tokens
- Number of parameters (WhisperEncoder): 23.7M
- Model size (WhisperEncoder): 90.6 MB
- Number of parameters (WhisperDecoder): 48.6M
- Model size (WhisperDecoder): 186 MB
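As a quick sanity check on these stats, the reported model sizes match FP32 weights at roughly 4 bytes per parameter. The sketch below is illustrative only; it assumes the sizes above are FP32 checkpoints measured in binary megabytes:

# Sanity check: FP32 checkpoint size ≈ parameter count × 4 bytes.
for name, params_m, size_mb in [("WhisperEncoder", 23.7, 90.6), ("WhisperDecoder", 48.6, 186.0)]:
    estimated_mb = params_m * 1e6 * 4 / 2**20  # bytes -> MiB
    print(f"{name}: estimated {estimated_mb:.1f} MB vs reported {size_mb} MB")
# WhisperEncoder: estimated 90.4 MB vs reported 90.6 MB
# WhisperDecoder: estimated 185.4 MB vs reported 186.0 MB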
Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
---|---|---|---|---|---|---|---|---|
WhisperEncoderInf | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 207.168 ms | 0 - 67 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 270.321 ms | 0 - 355 MB | FP16 | NPU | Whisper-Base-En.so |
WhisperEncoderInf | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 258.293 ms | 53 - 564 MB | FP16 | NPU | Whisper-Base-En.onnx |
WhisperEncoderInf | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 159.338 ms | 39 - 85 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 225.666 ms | 0 - 1377 MB | FP16 | NPU | Whisper-Base-En.so |
WhisperEncoderInf | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 165.997 ms | 163 - 1665 MB | FP16 | NPU | Whisper-Base-En.onnx |
WhisperEncoderInf | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 129.548 ms | 39 - 67 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 159.839 ms | 41 - 1544 MB | FP16 | NPU | Whisper-Base-En.onnx |
WhisperEncoderInf | SA7255P ADP | SA7255P | TFLITE | 1153.482 ms | 37 - 60 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | SA7255P ADP | SA7255P | QNN | 1010.596 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 204.974 ms | 0 - 67 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | SA8255 (Proxy) | SA8255P Proxy | QNN | 251.25 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | SA8295P ADP | SA8295P | TFLITE | 205.071 ms | 38 - 69 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | SA8295P ADP | SA8295P | QNN | 220.65 ms | 1 - 17 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 233.593 ms | 0 - 77 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | SA8650 (Proxy) | SA8650P Proxy | QNN | 221.139 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | SA8775P ADP | SA8775P | TFLITE | 367.2 ms | 38 - 62 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | SA8775P ADP | SA8775P | QNN | 215.878 ms | 0 - 9 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 1153.482 ms | 37 - 60 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 1010.596 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 201.097 ms | 0 - 66 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 242.356 ms | 1 - 4 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 367.2 ms | 38 - 62 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 215.878 ms | 0 - 9 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 264.328 ms | 38 - 89 MB | FP16 | GPU | Whisper-Base-En.tflite |
WhisperEncoderInf | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 383.819 ms | 1 - 1441 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 174.12 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
WhisperEncoderInf | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 202.131 ms | 133 - 133 MB | FP16 | NPU | Whisper-Base-En.onnx |
WhisperDecoderInf | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 9.835 ms | 5 - 31 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 5.8 ms | 20 - 39 MB | FP16 | NPU | Whisper-Base-En.so |
WhisperDecoderInf | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 10.337 ms | 11 - 446 MB | FP16 | NPU | Whisper-Base-En.onnx |
WhisperDecoderInf | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 7.583 ms | 5 - 123 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 4.397 ms | 186 - 258 MB | FP16 | NPU | Whisper-Base-En.so |
WhisperDecoderInf | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 8.155 ms | 50 - 178 MB | FP16 | NPU | Whisper-Base-En.onnx |
WhisperDecoderInf | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 7.295 ms | 5 - 114 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 3.486 ms | 18 - 84 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 7.88 ms | 50 - 155 MB | FP16 | NPU | Whisper-Base-En.onnx |
WhisperDecoderInf | SA7255P ADP | SA7255P | TFLITE | 36.508 ms | 5 - 111 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | SA7255P ADP | SA7255P | QNN | 26.501 ms | 18 - 27 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 9.908 ms | 5 - 31 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | SA8255 (Proxy) | SA8255P Proxy | QNN | 4.359 ms | 20 - 22 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | SA8295P ADP | SA8295P | TFLITE | 12.313 ms | 5 - 104 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | SA8295P ADP | SA8295P | QNN | 5.765 ms | 18 - 35 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 10.0 ms | 5 - 28 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | SA8650 (Proxy) | SA8650P Proxy | QNN | 4.213 ms | 20 - 23 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | SA8775P ADP | SA8775P | TFLITE | 12.335 ms | 4 - 109 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | SA8775P ADP | SA8775P | QNN | 5.42 ms | 18 - 27 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 36.508 ms | 5 - 111 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 26.501 ms | 18 - 27 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 9.84 ms | 6 - 31 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 4.237 ms | 20 - 24 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 12.335 ms | 4 - 109 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 5.42 ms | 18 - 27 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 12.029 ms | 5 - 116 MB | FP16 | NPU | Whisper-Base-En.tflite |
WhisperDecoderInf | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 6.666 ms | 20 - 91 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 3.797 ms | 20 - 20 MB | FP16 | NPU | Use Export Script |
WhisperDecoderInf | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 9.212 ms | 106 - 106 MB | FP16 | NPU | Whisper-Base-En.onnx |
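Using the latency model described above (the encoder runs once per 30-second chunk, the decoder once per generated token), the table supports a back-of-the-envelope estimate of end-to-end transcription time. The sketch below is illustrative only; it uses the Samsung Galaxy S23 QNN rows and the 112-token mean decoded length from the model stats:

# Rough end-to-end latency for one 30-second chunk (Samsung Galaxy S23, QNN).
encoder_ms = 270.321        # runs once per chunk (time to first token)
decoder_ms = 5.8            # runs once per generated token
mean_decoded_tokens = 112   # mean decoded sequence length (model stats)
total_ms = encoder_ms + mean_decoded_tokens * decoder_ms
print(f"Estimated transcription time per chunk: {total_ms:.0f} ms")
# Estimated transcription time per chunk: 920 ms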
Installation
Install the package via pip:
pip install "qai-hub-models[whisper-base-en]"
Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.
With this API token, you can configure your client to run models on cloud-hosted devices.
qai-hub configure --api_token API_TOKEN
Navigate to docs for more information.
Demo off target
The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.
python -m qai_hub_models.models.whisper_base_en.demo
The above demo runs a reference implementation of pre-processing, model inference, and post-processing.
NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the above:
%run -m qai_hub_models.models.whisper_base_en.demo
Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
- Runs a performance check on a cloud-hosted device.
- Downloads compiled assets that can be deployed on-device for Android.
- Checks accuracy between PyTorch and on-device outputs.
python -m qai_hub_models.models.whisper_base_en.export
Profiling Results
------------------------------------------------------------
WhisperEncoderInf
Device : Samsung Galaxy S23 (13)
Runtime : TFLITE
Estimated inference time (ms) : 207.2
Estimated peak memory usage (MB): [0, 67]
Total # Ops : 419
Compute Unit(s) : GPU (408 ops) CPU (11 ops)
------------------------------------------------------------
WhisperDecoderInf
Device : Samsung Galaxy S23 (13)
Runtime : TFLITE
Estimated inference time (ms) : 9.8
Estimated peak memory usage (MB): [5, 31]
Total # Ops : 983
Compute Unit(s) : NPU (983 ops)
How does this work?
This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:
Step 1: Compile model for on-device deployment
To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.
import torch
import qai_hub as hub
from qai_hub_models.models.whisper_base_en import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
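If you want the compiled asset on disk as well, the target model can be saved locally. This is a minimal sketch; the filename is illustrative:

# Save the compiled asset locally for later deployment.
target_model.download("whisper_base_en.tflite")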
Step 2: Performance profiling on cloud-hosted device
After compiling the model in step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
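Beyond the job URL, the raw profiling data can also be pulled down for offline inspection. A minimal sketch, assuming download_profile() returns the profile as a Python dict (its exact schema may vary across qai-hub versions):

# Fetch the raw profile data once the job completes.
profile = profile_job.download_profile()
print(list(profile.keys()))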
Step 3: Verify on-device accuracy
To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
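For example, a PSNR check against the PyTorch reference might look like the following. This is a minimal sketch; it assumes the on-device outputs arrive as lists of arrays keyed by output name, and torch_outputs is a hypothetical variable holding the PyTorch reference outputs in the same order:

import numpy as np

def psnr(reference, test):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    peak = np.abs(reference).max()
    return 10.0 * np.log10(peak ** 2 / mse)

for (name, arrays), torch_out in zip(on_device_output.items(), torch_outputs):
    print(f"{name}: PSNR = {psnr(torch_out, arrays[0]):.1f} dB")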
Note: On-device profiling and inference require access to Qualcomm® AI Hub. Sign up for access.
Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.
- QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
View on Qualcomm® AI Hub
Get more details on Whisper-Base-En's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
License
- The license for the original implementation of Whisper-Base-En can be found here.
- The license for the compiled assets for on-device deployment can be found here.
Community
- Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.
- For questions or feedback, please reach out to us.