---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c1fef5b9d81735a12c3fcc/sebNQgVO1hUapijWvVwTl.jpeg" width=600>

# YOLOv5: Object Detection

YOLOv5 is a one-stage object detection network. Its architecture consists of four parts: a backbone built from a modified CSPNet, a multi-scale feature fusion neck based on FPN (Feature Pyramid Network), a pooling module based on SPP (Spatial Pyramid Pooling), and three detection heads that detect objects at different scales.
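For a 640×640 input, the three heads predict on feature maps of stride 8, 16, and 32. A minimal sketch of the resulting per-head output shapes, assuming the standard YOLOv5 configuration of 3 anchors per grid cell and an 80-class COCO head:

```python
# Per-head output shapes of YOLOv5s for a 640x640 input.
# Assumes the standard configuration: 3 anchors per grid cell and
# 85 channels per anchor (4 box coords + 1 objectness + 80 COCO classes).
img_size = 640
strides = [8, 16, 32]        # P3, P4, P5 detection heads
num_anchors = 3
num_outputs = 4 + 1 + 80

shapes = []
for s in strides:
    g = img_size // s        # grid cells per side at this stride
    shapes.append((num_anchors, g, g, num_outputs))
    print(f"stride {s:2d}: {shapes[-1]}")
```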

The YOLOv5 model can be found [here](https://github.com/ultralytics/yolov5).

## CONTENTS
  - [Source Model](#source-model)
  - [Performance](#performance)
  - [Model Conversion](#model-conversion)
  - [Tutorial](#tutorial)

## Source Model

Follow the [yolov5 export tutorial](https://docs.ultralytics.com/yolov5/tutorials/model_export/) to obtain the source model in ONNX format.

> The source model **YOLOv5s.onnx** can also be found [here](https://huggingface.co/aplux/YOLOv5/blob/main/yolov5s.onnx).

**Environment Preparation**
```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

**Export to ONNX**

```bash
python export.py --weights yolov5s.pt --include torchscript onnx --opset 12
```

## Performance

<center><b>🧰QCS6490</b></center>

|Device|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|6.7|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int8_qnn/cutoff_yolov5s_int8.qnn.serialized.bin)|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|15.2|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int16_qnn/cutoff_yolov5s_int16.qnn.serialized.bin)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|5.5|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int8_htp_snpe2/cutoff_yolov5s_int8_htp_snpe2.dlc)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|13.4|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS6490/cutoff_yolov5s_int16_htp_snpe2/cutoff_yolov5s_int16_htp_snpe2.dlc)|

<center><b>🧰QCS8550</b></center>

|Device|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|4.1|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int8_qnn/cutoff_yolov5s_640_int8.qnn.serialized.bin)|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|13.4|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int16_qnn/cutoff_yolov5s_640_int16.qnn.serialized.bin)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|2.3|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int8_htp_snpe2/cutoff_yolov5s_int8_htp_snpe2.dlc)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|5.8|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/models/QCS8550/cutoff_yolov5s_int16_htp_snpe2/cutoff_yolov5s_int16_htp_snpe2.dlc)|
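The latencies above correspond to a single-stream throughput of roughly 1000 / latency frames per second. For example, taking the fastest (SNPE INT8) row for each device:

```python
# Convert the measured inference latencies (ms) into approximate FPS.
# Figures taken from the SNPE INT8 rows of the tables above.
latencies_ms = {
    "AidBox QCS6490, SNPE INT8": 5.5,
    "APLUX QCS8550, SNPE INT8": 2.3,
}
fps = {name: 1000.0 / ms for name, ms in latencies_ms.items()}
for name, f in fps.items():
    print(f"{name}: ~{f:.0f} FPS")
```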

## Model Conversion

Demo models are converted with [**AIMO (AI Model Optimizer)**](https://aidlux.com/en/product/aimo).

The demo model conversion steps on AIMO can be found below:

<center><b>🧰QCS6490</b></center>

|Device|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_qnn_int8.png)|
|AidBox QCS6490|QNN|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_qnn_int16.png)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_snpe_int8.png)|
|AidBox QCS6490|SNPE|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS6490/aimo_yolov5s_snpe_int16.png)|

<center><b>🧰QCS8550</b></center>

|Device|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_qnn_int8.png)|
|APLUX QCS8550|QNN|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_qnn_int16.png)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|INT8|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_snpe_int8.png)|
|APLUX QCS8550|SNPE|YOLOv5s(cutoff)|640|INT16|NPU|[view steps](https://huggingface.co/aplux/YOLOv5/blob/main/aimo/QCS8550/aimo_yolov5s_snpe_int16.png)|

## Tutorial

### Step1: convert model

1.1 Prepare the source model in ONNX format. It can be downloaded [here](https://huggingface.co/aplux/YOLOv5/blob/main/yolov5s.onnx) or exported by following [Source Model](#source-model).

1.2 Log in to [AIMO](https://aidlux.com/en/product/aimo) and convert the source model to the target format, following the **AIMO Conversion Steps** in the [Model Conversion](#model-conversion) tables.

1.3 When the conversion task is done, download the target model file.

> Note: you can skip the model conversion step and directly download a converted model from the [Performance](#performance) tables.

### Step2: install AidLite SDK

```bash
# install aidlite sdk c++ api 
sudo aid-pkg -i aidlite-sdk
# install aidlite sdk python api
python3 -m pip install pyaidlite -i https://mirrors.aidlux.com --trusted-host mirrors.aidlux.com
```

The AidLite SDK developer documentation can be found [here](https://huggingface.co/datasets/aplux/AIToolKit/blob/main/AidLite%20SDK%20Development%20Documents.md).

### Step3: model inference

3.1 Download the demo program
```bash
# download demo program
wget https://huggingface.co/aplux/YOLOv5/resolve/main/examples.zip
# unzip
unzip examples.zip
```

3.2 Set `model_path` to the path of your converted model, then run the demo

```bash
# run qnn demo
python qnn_yolov5_multi.py
# run snpe demo
python snpe2_yolov5_multi.py
```