---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c1fef5b9d81735a12c3fcc/sebNQgVO1hUapijWvVwTl.jpeg" width=600>
# YOLOv5: Object Detection
YOLOv5 is a one-stage object detection network. Its architecture consists of four main parts: a backbone built from a modified CSPNet, a high-resolution feature-fusion module based on FPN (Feature Pyramid Network), a pooling module based on SPP (Spatial Pyramid Pooling), and three detection heads that detect targets of different sizes.

The YOLOv5 source code and weights can be found [here](https://github.com/ultralytics/yolov5).
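Each of the three detection heads predicts raw offsets per grid cell that are decoded into pixel-space boxes. A minimal pure-Python sketch of the standard YOLOv5 decoding formula (`decode_cell` and its argument names are illustrative, not part of any API here):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_cell(t_xy, t_wh, grid, stride, anchor):
    """Decode one cell's raw (tx, ty, tw, th) into a box center and size in pixels.

    YOLOv5 uses: xy = (2*sigmoid(t) - 0.5 + grid) * stride
                 wh = (2*sigmoid(t))**2 * anchor
    """
    cx = (2 * sigmoid(t_xy[0]) - 0.5 + grid[0]) * stride
    cy = (2 * sigmoid(t_xy[1]) - 0.5 + grid[1]) * stride
    w = (2 * sigmoid(t_wh[0])) ** 2 * anchor[0]
    h = (2 * sigmoid(t_wh[1])) ** 2 * anchor[1]
    return cx, cy, w, h

# Raw zeros at grid cell (0, 0) on the stride-8 head with anchor (10, 13)
# decode to a box centered at (4, 4) with the anchor's own width and height:
print(decode_cell((0.0, 0.0), (0.0, 0.0), (0, 0), 8, (10, 13)))  # (4.0, 4.0, 10.0, 13.0)
```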
## CONTENTS
- [Source Model](#source-model)
- [Performance](#performance)
- [Model Conversion](#model-conversion)
- [Tutorial](#tutorial)
## Source Model
The steps below follow the [yolov5 export tutorial](https://docs.ultralytics.com/yolov5/tutorials/model_export/) to obtain the source model in ONNX format.
> The source model **yolov5s.onnx** can also be downloaded [here](https://huggingface.co/aplux/YOLOv5/blob/main/yolov5s.onnx).
**Environment Preparation**
```bash
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
```
**Export to ONNX**
```bash
python export.py --weights yolov5s.pt --include torchscript onnx --opset 12
```
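For a 640×640 input, the exported YOLOv5s head concatenates predictions from the stride-8, 16, and 32 grids with 3 anchors per cell, giving 25200 candidate boxes. A quick sanity check of that count (the helper function is illustrative):

```python
# 3 anchors per cell across the 80x80, 40x40, and 20x20 grids:
# 3 * (6400 + 1600 + 400) = 25200 candidate boxes.
def num_predictions(img_size=640, strides=(8, 16, 32), anchors_per_cell=3):
    return sum(anchors_per_cell * (img_size // s) ** 2 for s in strides)

print(num_predictions())  # 25200
```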
## Performance
<center><b>🧰QCS6490</b></center>
|Device|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|6.7|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int8_qnn/cutoff_yolov5s_int8.qnn.serialized.bin)|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|15.2|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int16_qnn/cutoff_yolov5s_int16.qnn.serialized.bin)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|5.5|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int8_htp_snpe2/cutoff_yolov5s_int8_htp_snpe2.dlc)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|13.4|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int16_htp_snpe2/cutoff_yolov5s_int16_htp_snpe2.dlc)|
<center><b>🧰QCS8550</b></center>
|Device|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|4.1|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int8_qnn/cutoff_yolov5s_640_int8.qnn.serialized.bin)|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|13.4|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int16_qnn/cutoff_yolov5s_640_int16.qnn.serialized.bin)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|2.3|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int8_htp_snpe2/cutoff_yolov5s_int8_htp_snpe2.dlc)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|5.8|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int16_htp_snpe2/cutoff_yolov5s_int16_htp_snpe2.dlc)|
## Model Conversion
The demo models were converted with [**AIMO (AI Model Optimizer)**](https://aidlux.com/en/product/aimo).
The demo model conversion steps on AIMO can be found below:
<center><b>🧰QCS6490</b></center>
|Device|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_qnn_int8.png)|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_qnn_int16.png)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_snpe_int8.png)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_snpe_int16.png)|
<center><b>🧰QCS8550</b></center>
|Device|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_qnn_int8.png)|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_qnn_int16.png)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_snpe_int8.png)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_snpe_int16.png)|
## Tutorial
### Step 1: convert model
1.1 Prepare the source model in ONNX format. It can be downloaded [here](https://huggingface.co/aplux/YOLOv5/blob/main/yolov5s.onnx) or exported by following [Source Model](#source-model).

1.2 Log in to [AIMO](https://aidlux.com/en/product/aimo) and convert the source model to the target format, following the **AIMO Conversion Steps** in the [Model Conversion](#model-conversion) tables.

1.3 When the conversion task is done, download the target model file.
> note: you can skip the conversion step and directly download a converted model from the [Performance](#performance) tables.
### Step 2: install AidLite SDK
```bash
# install aidlite sdk c++ api
sudo aid-pkg -i aidlite-sdk
# install aidlite sdk python api
python3 -m pip install pyaidlite -i https://mirrors.aidlux.com --trusted-host mirrors.aidlux.com
```
The developer document of AidLite SDK can be found [here](https://huggingface.co/datasets/aplux/AIToolKit/blob/main/AidLite%20SDK%20Development%20Documents.md).
### Step 3: model inference
3.1 Download the demo program
```bash
# download demo program
wget https://huggingface.co/aplux/YOLOv5/resolve/main/examples.zip
# unzip
unzip examples.zip
```
3.2 Set `model_path` to the path of your converted model, then run the demo
```bash
# run qnn demo
python qnn_yolov5_multi.py
# run snpe demo
python snpe2_yolov5_multi.py
```
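Before inference, YOLOv5 pipelines typically letterbox the input image: scale it to fit 640×640 while preserving the aspect ratio, then pad the remainder. A minimal sketch of that scale/padding computation (the function name is illustrative and not the demo's actual API):

```python
def letterbox_params(src_w, src_h, dst=640):
    """Return (scale, pad_x, pad_y) that map a src image into a dst x dst canvas.

    The image is scaled by the smaller ratio so it fits entirely, then
    centered with equal padding on each side of the shorter dimension.
    """
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) / 2
    pad_y = (dst - new_h) / 2
    return scale, pad_x, pad_y

# A 1280x720 frame is halved to 640x360, then padded 140 px top and bottom:
print(letterbox_params(1280, 720))  # (0.5, 0.0, 140.0)
```

The same `scale`, `pad_x`, and `pad_y` values are reused after inference to map detected boxes back into the original image's coordinates.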