lazarevich committed: update description
app.py
CHANGED
@@ -82,7 +82,7 @@ with gr.Blocks(
 
 🔸 includes architectures from YOLOv3 to YOLOv8, <br>
 🔸 trained on <span style="font-weight:bold">four</span> popular object detection datasets (COCO, VOC, WIDER FACE, SKU-110k), <br>
-🔸 latency measured on <span style="font-weight:bold">
+🔸 latency measured on <span style="font-weight:bold">a growing list of hardware platforms</span> (examples include Jetson Nano GPU, ARM CPU, Intel CPU, Khadas VIM3 NPU, Orange Pi NPU), <br>
 🔸 all models are trained with <span style="font-weight:bold">the same</span> training loop and hyperparameters (as implemented in the [Ultralytics YOLOv8 codebase](https://github.com/ultralytics/ultralytics)), <br>
 🔸 both <span style="font-weight:bold">the detection head structure</span> and <span style="font-weight:bold">the loss function</span> used are that of YOLOv8, giving a chance to isolate the contribution of the backbone/neck architecture on the latency-accuracy trade-off of YOLO models. <br>
 In particular, we show that older backbone/neck structures like those of YOLOv3 and YOLOv4 are still competitive compared to more recent architectures in a controlled environment. For more details, please refer to the [arXiv preprint](https://arxiv.org/abs/2307.13901) and the [codebase](https://github.com/Deeplite/deeplite-torch-zoo).
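The description above states that every architecture is trained with the same training loop and default hyperparameters from the Ultralytics YOLOv8 codebase. As a minimal sketch of what that shared setup looks like (not the benchmark's actual scripts), the snippet below uses the standard `ultralytics` API; the model and dataset YAMLs (`yolov8n.yaml`, `VOC.yaml`) are illustrative placeholders, not values taken from the benchmark.

```python
# Minimal sketch of the shared Ultralytics training setup the description refers to.
# Assumptions: the standard `ultralytics` package API; the model/dataset YAMLs below
# are illustrative placeholders, not the benchmark's own configs.
from ultralytics import YOLO

# Build a model from an architecture definition (trained from scratch, no pretrained weights)
model = YOLO("yolov8n.yaml")

# Train with the default Ultralytics hyperparameters and training loop
model.train(data="VOC.yaml", epochs=100, imgsz=640)

# Evaluate detection accuracy on the validation split
metrics = model.val()
print(metrics.box.map)  # mAP50-95
```

Keeping the loop, hyperparameters, head, and loss fixed in this way is what lets the leaderboard attribute accuracy/latency differences to the backbone/neck alone.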