---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32aimodelzoo/object_detection/yolov8n/LICENSE.md
pipeline_tag: object-detection
---
# YOLOv8n object detection quantized

## **Use case** : `Object detection`

# Model description

YOLOv8n is a lightweight and efficient model designed for object detection tasks. It is part of the YOLO (You Only Look Once) family of models, known for their real-time object detection capabilities. The "n" in YOLOv8n indicates that it is the nano version, optimized for speed and resource efficiency, making it suitable for deployment on devices with limited computational power, such as mobile devices and embedded systems.

YOLOv8n is implemented in PyTorch by Ultralytics and is quantized to int8 format using the TensorFlow Lite converter.

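Int8 quantization maps each real-valued tensor to 8-bit integers through an affine scale and zero-point. A minimal sketch of that mapping, with illustrative scale/zero-point values (not taken from this model's actual quantization parameters):

```python
import numpy as np

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 quantized values back to float: r = scale * (q - zero_point)."""
    return scale * (q.astype(np.int32) - zero_point)

def quantize(r: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Inverse mapping, with rounding and saturation to the int8 range."""
    q = np.round(r / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

# Illustrative parameters: an input normalized to [0, 1] could use
# scale = 1/255 and zero_point = -128 in int8.
scale, zp = 1.0 / 255.0, -128
r = dequantize(np.array([-128, 0, 127], dtype=np.int8), scale, zp)
print(r)  # [0.         0.50196078 1.        ]
```

The per-channel variants in the tables below apply a separate scale per output channel of the weights, which usually preserves accuracy better than a single per-tensor scale.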
## Network information

| Network information | Value |
|-------------------------|-----------------|
| Framework | TensorFlow Lite |
| Quantization | int8 |
| Provenance | https://docs.ultralytics.com/tasks/detect/ |

## Network inputs / outputs

With an image resolution of NxM and K classes to detect:

| Input Shape | Description |
| ----- | ----------- |
| (1, N, M, 3) | Single NxM RGB image with UINT8 values between 0 and 255 |

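Feeding the model therefore means producing a batched NxM uint8 tensor. A minimal preprocessing sketch using nearest-neighbor resampling in NumPy (real pipelines typically use proper interpolation from an image library; `preprocess` is an illustrative helper, not part of the model zoo API):

```python
import numpy as np

def preprocess(image: np.ndarray, size: int) -> np.ndarray:
    """Resize an HxWx3 uint8 image to (1, size, size, 3), matching the
    (1, N, M, 3) UINT8 input layout described above."""
    h, w, _ = image.shape
    # Nearest-neighbor sampling: pick the source row/col for each output pixel.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows[:, None], cols[None, :]]
    return resized[np.newaxis, ...].astype(np.uint8)  # add batch dimension

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
inp = preprocess(frame, 256)
print(inp.shape, inp.dtype)  # (1, 256, 256, 3) uint8
```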
| Output Shape | Description |
| ----- | ----------- |
| (1, 4+K, F) | FLOAT values, where F = (N/8)^2 + (N/16)^2 + (N/32)^2 is the total number of cells in the 3 concatenated feature maps |

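The prediction count F follows directly from the three stride-8/16/32 feature maps. A quick check for the square input resolutions used by the models in this card (for the single-class COCO-Person models, K = 1, so the output is (1, 5, F)):

```python
def num_predictions(n: int) -> int:
    """F = (N/8)^2 + (N/16)^2 + (N/32)^2 for a square NxN input."""
    return sum((n // s) ** 2 for s in (8, 16, 32))

for n in (192, 256, 320, 416):
    print(n, num_predictions(n))
# 192 756
# 256 1344
# 320 2100
# 416 3549
```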
## Recommended Platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0  | []        | []          |
| STM32L4  | []        | []          |
| STM32U5  | []        | []          |
| STM32H7  | []        | []          |
| STM32MP1 | []        | []          |
| STM32MP2 | [x]       | [x]         |
| STM32N6  | [x]       | [x]         |

# Performances

## Metrics

Measures are done with the default STM32Cube.AI configuration, with the input / output allocated option enabled.

### Reference **NPU** memory footprint based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STM32Cube.AI version | STEdgeAI Core version |
|----------|------------------|--------|-------------|------------------|------------------|---------------------|-------|----------------------|-------------------------|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_192_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6 | 697.5 | 0.0 | 2965.61 | 10.0.0 | 2.0.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6 | 1626 | 0.0 | 2970.13 | 10.0.0 | 2.0.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_320_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6 | 2162.5 | 0.0 | 2975.99 | 10.0.0 | 2.0.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_416_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 416x416x3 | STM32N6 | 2704 | 0.0 | 2987.52 | 10.0.0 | 2.0.0 |

### Reference **NPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
|--------|------------------|--------|-------------|------------------|------------------|---------------------|-------|----------------------|-------------------------|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_192_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6570-DK | NPU/MCU | 18.91 | 52.89 | 10.0.0 | 2.0.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 28.6 | 34.97 | 10.0.0 | 2.0.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_320_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 38.32 | 26.09 | 10.0.0 | 2.0.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_416_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 416x416x3 | STM32N6570-DK | NPU/MCU | 63.03 | 15.86 | 10.0.0 | 2.0.0 |

### Reference **MPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Format | Resolution | Quantization | Board | Execution Engine | Frequency | Inference time (ms) | %NPU | %GPU | %CPU | X-LINUX-AI version | Framework |
|-----------|--------|------------|---------------|-------------------|------------------|-----------|---------------------|-------|-------|------|--------------------|-----------------------|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 102.8 ms | 11.70 | 88.30 | 0 | v5.0.0 | OpenVX |
| [YOLOv8n per tensor](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pt_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-tensor | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 17.57 ms | 86.79 | 13.21 | 0 | v5.0.0 | OpenVX |

** **To get the most out of the MP25 NPU hardware acceleration, please use per-tensor quantization.**

### AP on COCO Person dataset

Dataset details: [link](https://cocodataset.org/#download), License [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode), Quotation [[1]](#1), Number of classes: 80, Number of images: 118,287

| Model | Format | Resolution | AP* |
|-------|--------|------------|----------------|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_192_quant_pc_uf_od_coco-person.tflite) | Int8 | 192x192x3 | 56.90 % |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_od_coco-person.tflite) | Int8 | 256x256x3 | 62.60 % |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_320_quant_pc_uf_od_coco-person.tflite) | Int8 | 320x320x3 | 66.20 % |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_416_quant_pc_uf_od_coco-person.tflite) | Int8 | 416x416x3 | 68.90 % |

\* EVAL_IOU = 0.4, NMS_THRESH = 0.5, SCORE_THRESH = 0.001

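The evaluation above filters raw predictions by score, suppresses overlapping boxes with NMS, and matches detections to ground truth at EVAL_IOU. A minimal IoU and greedy-NMS sketch using these thresholds (boxes as `[x1, y1, x2, y2]`; a simplified stand-in, not the model zoo's evaluation code):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, score_thresh=0.001, nms_thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop overlaps >= nms_thresh."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thresh]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < nms_thresh for j in keep):
            keep.append(int(i))
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate box is suppressed
```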
## Integration in a simple example and other services support

Please refer to the stm32ai-modelzoo-services GitHub [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).
The models are stored in the Ultralytics repository. You can find them at the following link: [Ultralytics YOLOv8-STEdgeAI Models](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/).

Please refer to the [Ultralytics documentation](https://docs.ultralytics.com/tasks/pose/#train) to retrain the models.

# References

<a id="1">[1]</a>
“Microsoft COCO: Common Objects in Context”. [Online]. Available: https://cocodataset.org/#download.

    @article{DBLP:journals/corr/LinMBHPRDZ14,
      author    = {Tsung{-}Yi Lin and
                   Michael Maire and
                   Serge J. Belongie and
                   Lubomir D. Bourdev and
                   Ross B. Girshick and
                   James Hays and
                   Pietro Perona and
                   Deva Ramanan and
                   Piotr Doll{\'{a}}r and
                   C. Lawrence Zitnick},
      title     = {Microsoft {COCO:} Common Objects in Context},
      journal   = {CoRR},
      volume    = {abs/1405.0312},
      year      = {2014},
      url       = {http://arxiv.org/abs/1405.0312},
      archivePrefix = {arXiv},
      eprint    = {1405.0312},
      timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
      biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
      bibsource = {dblp computer science bibliography, https://dblp.org}
    }