pedrogengo committed on
Commit
7475071
1 Parent(s): dd4916d

Upload 3 files

Files changed (3)
  1. pymatting.md +0 -0
  2. transformers.md +173 -0
  3. yolo.md +323 -0
pymatting.md ADDED
The diff for this file is too large to render. See raw diff
 
transformers.md ADDED
@@ -0,0 +1,173 @@
# PyMatting: A Python Library for Alpha Matting

[![License: MIT](https://img.shields.io/github/license/pymatting/pymatting?color=brightgreen)](https://opensource.org/licenses/MIT)
[![CI](https://img.shields.io/github/actions/workflow/status/pymatting/pymatting/.github/workflows/tests.yml?branch=master)](https://github.com/pymatting/pymatting/actions?query=workflow%3Atests)
[![PyPI](https://img.shields.io/pypi/v/pymatting)](https://pypi.org/project/PyMatting/)
[![JOSS](https://joss.theoj.org/papers/9766cab65bfbf07a70c8a835edd3875a/status.svg)](https://joss.theoj.org/papers/9766cab65bfbf07a70c8a835edd3875a)
[![Gitter](https://img.shields.io/gitter/room/pymatting/pymatting)](https://gitter.im/pymatting/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

We introduce the PyMatting package for Python, which implements various methods to solve the alpha matting problem.

- **Website and Documentation:** [https://pymatting.github.io/](https://pymatting.github.io)
- **Benchmarks:** [https://pymatting.github.io/benchmarks.html](https://pymatting.github.io/benchmarks.html)

![Lemur](https://github.com/pymatting/pymatting/raw/master/data/lemur/lemur_at_the_beach.png)

Given an input image and a hand-drawn trimap (top row), alpha matting estimates the alpha channel of a foreground object, which can then be composed onto a different background (bottom row).

PyMatting provides:
- Alpha matting implementations for:
  - Closed Form Alpha Matting [[1]](#1)
  - Large Kernel Matting [[2]](#2)
  - KNN Matting [[3]](#3)
  - Learning Based Digital Matting [[4]](#4)
  - Random Walk Matting [[5]](#5)
  - Shared Sampling Matting [[6]](#6)
- Foreground estimation implementations for:
  - Closed Form Foreground Estimation [[1]](#1)
  - Fast Multi-Level Foreground Estimation (CPU, CUDA and OpenCL) [[7]](#7)
- Fast multithreaded KNN search
- Preconditioners to accelerate the convergence rate of conjugate gradient descent:
  - The *incomplete thresholded Cholesky decomposition* (*Incomplete* is part of the name. The implementation is quite complete.)
  - The V-Cycle Geometric Multigrid preconditioner
- Readable code leveraging [NumPy](https://numpy.org/), [SciPy](https://scipy.org/) and [Numba](http://numba.pydata.org/)

## Getting Started

### Requirements

Minimal requirements
* numpy>=1.16.0
* pillow>=5.2.0
* numba>=0.47.0
* scipy>=1.1.0

Additional requirements for GPU support
* cupy-cuda90>=6.5.0 or similar
* pyopencl>=2019.1.2

Requirements to run the tests
* pytest>=5.3.4

### Installation with PyPI

```bash
pip3 install pymatting
```

### Installation from Source

```bash
git clone https://github.com/pymatting/pymatting
cd pymatting
pip3 install .
```

## Example
```python
from pymatting import cutout

cutout(
    # input image path
    "data/lemur/lemur.png",
    # input trimap path
    "data/lemur/lemur_trimap.png",
    # output cutout path
    "lemur_cutout.png")
```

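If you need more control than `cutout` offers, the individual steps can also be run separately. The following is a minimal sketch of that lower-level workflow (load, estimate alpha, estimate foreground, composite onto a new background); the function names follow the PyMatting documentation, but check the docs for the exact signatures, and other estimators such as `estimate_alpha_knn` can be swapped in for `estimate_alpha_cf`.

```python
import numpy as np
from pymatting import (
    load_image, save_image, estimate_alpha_cf, estimate_foreground_ml, blend,
)

# load the input image and trimap (values scaled to [0, 1])
image = load_image("data/lemur/lemur.png", "RGB")
trimap = load_image("data/lemur/lemur_trimap.png", "GRAY")

# estimate the alpha matte with Closed Form Alpha Matting [1]
alpha = estimate_alpha_cf(image, trimap)

# estimate the foreground colors with Fast Multi-Level Foreground Estimation [7]
foreground = estimate_foreground_ml(image, alpha)

# compose the foreground onto a plain gray background
background = np.full(image.shape, 0.5)
new_image = blend(foreground, background, alpha)

save_image("lemur_on_gray.png", new_image)
```
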
[More advanced examples](https://pymatting.github.io/examples.html)

## Trimap Construction

All implemented methods rely on trimaps, which roughly classify the image into foreground, background and unknown regions.
Trimaps are expected to be `numpy.ndarrays` of type `np.float64`, with the same height and width as the input image but only a single color channel.
Trimap values of 0.0 denote pixels which are 100% background.
Similarly, trimap values of 1.0 denote pixels which are 100% foreground.
All other values indicate unknown pixels which will be estimated by the algorithm.

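Trimaps are often drawn by hand, but a rough trimap can also be derived from a binary segmentation mask by eroding the mask to obtain certain foreground and dilating it to obtain certain background, leaving a band of unknown pixels around the boundary. A minimal sketch using only NumPy and SciPy (the 15-pixel band width is an arbitrary choice):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def trimap_from_mask(mask, band=15):
    """Build a float64 trimap from a boolean foreground mask.

    Pixels well inside the mask become 1.0 (foreground), pixels well
    outside become 0.0 (background), and a band of roughly `band` pixels
    around the mask boundary is marked 0.5 (unknown).
    """
    mask = mask.astype(bool)
    is_foreground = binary_erosion(mask, iterations=band)
    is_background = ~binary_dilation(mask, iterations=band)

    trimap = np.full(mask.shape, 0.5, dtype=np.float64)
    trimap[is_foreground] = 1.0
    trimap[is_background] = 0.0
    return trimap
```
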
## Testing

Run the tests from the main directory:
```bash
python3 tests/download_images.py
pip3 install -r requirements_tests.txt
pytest
```

Currently 89% of the code is covered by tests.

## Upgrade

```bash
pip3 install --upgrade pymatting
python3 -c "import pymatting"
```

## Bug Reports, Questions and Pull-Requests

Please see [our community guidelines](https://github.com/pymatting/pymatting/blob/master/CONTRIBUTING.md).

## Authors

- **Thomas Germer**
- **Tobias Uelwer**
- **Stefan Conrad**
- **Stefan Harmeling**

See also the list of [contributors](https://github.com/pymatting/pymatting/contributors) who participated in this project.

## Projects using PyMatting

* [Rembg](https://github.com/danielgatis/rembg) - an excellent tool for removing image backgrounds.
* [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) - a library for a wide range of image segmentation tasks.
* [chaiNNer](https://github.com/chaiNNer-org/chaiNNer) - a node-based image processing GUI.
* [LSA-Matting](https://github.com/kfeng123/LSA-Matting) - improving deep image matting via local smoothness assumption.

## License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.

## Citing

If you found PyMatting to be useful for your work, please consider citing our [paper](https://doi.org/10.21105/joss.02481):

```
@article{Germer2020,
  doi = {10.21105/joss.02481},
  url = {https://doi.org/10.21105/joss.02481},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {54},
  pages = {2481},
  author = {Thomas Germer and Tobias Uelwer and Stefan Conrad and Stefan Harmeling},
  title = {PyMatting: A Python Library for Alpha Matting},
  journal = {Journal of Open Source Software}
}
```

## References

<a id="1">[1]</a>
Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):228–242, 2007.

<a id="2">[2]</a>
Kaiming He, Jian Sun, and Xiaoou Tang. Fast matting using large kernel matting Laplacian matrices. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2165–2172. IEEE, 2010.

<a id="3">[3]</a>
Qifeng Chen, Dingzeyu Li, and Chi-Keung Tang. KNN matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9):2175–2188, 2013.

<a id="4">[4]</a>
Yuanjie Zheng and Chandra Kambhamettu. Learning based digital matting. In 2009 IEEE 12th International Conference on Computer Vision, 889–896. IEEE, 2009.

<a id="5">[5]</a>
Leo Grady, Thomas Schiwietz, Shmuel Aharon, and Rüdiger Westermann. Random walks for interactive alpha-matting. In Proceedings of VIIP, volume 2005, 423–429. 2005.

<a id="6">[6]</a>
Eduardo S. L. Gastal and Manuel M. Oliveira. Shared sampling for real-time alpha matting. Computer Graphics Forum, 29(2):575–584, 2010. Proceedings of Eurographics 2010.

<a id="7">[7]</a>
Thomas Germer, Tobias Uelwer, Stefan Conrad, and Stefan Harmeling. Fast multi-level foreground estimation. arXiv preprint arXiv:2006.14970, 2020.

Lemur image by Mathias Appel from https://www.flickr.com/photos/mathiasappel/25419442300/ licensed under [CC0 1.0 Universal (CC0 1.0) Public Domain License](https://creativecommons.org/publicdomain/zero/1.0/).
yolo.md ADDED
@@ -0,0 +1,323 @@
# YOLOv9

Implementation of paper - [YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information](https://arxiv.org/abs/2402.13616)

[![arxiv.org](http://img.shields.io/badge/cs.CV-arXiv%3A2402.13616-B31B1B.svg)](https://arxiv.org/abs/2402.13616)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/kadirnar/Yolov9)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/merve/yolov9)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov9-object-detection-on-custom-dataset.ipynb)
[![OpenCV](https://img.shields.io/badge/OpenCV-BlogPost-black?logo=opencv&labelColor=blue&color=black)](https://learnopencv.com/yolov9-advancing-the-yolo-legacy/)

<div align="center">
    <a href="./">
        <img src="./figure/performance.png" width="79%"/>
    </a>
</div>


## Performance

MS COCO

| Model | Test Size | AP<sup>val</sup> | AP<sub>50</sub><sup>val</sup> | AP<sub>75</sub><sup>val</sup> | Param. | FLOPs |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: |
| [**YOLOv9-T**]() | 640 | **38.3%** | **53.1%** | **41.3%** | **2.0M** | **7.7G** |
| [**YOLOv9-S**]() | 640 | **46.8%** | **63.4%** | **50.7%** | **7.1M** | **26.4G** |
| [**YOLOv9-M**]() | 640 | **51.4%** | **68.1%** | **56.1%** | **20.0M** | **76.3G** |
| [**YOLOv9-C**](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-c-converted.pt) | 640 | **53.0%** | **70.2%** | **57.8%** | **25.3M** | **102.1G** |
| [**YOLOv9-E**](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-e-converted.pt) | 640 | **55.6%** | **72.8%** | **60.6%** | **57.3M** | **189.0G** |
<!-- | [**YOLOv9 (ReLU)**]() | 640 | **51.9%** | **69.1%** | **56.5%** | **25.3M** | **102.1G** | -->

<!-- tiny, small, and medium models will be released after the paper is accepted and published. -->

## Useful Links

<details><summary> <b>Expand</b> </summary>

Custom training: https://github.com/WongKinYiu/yolov9/issues/30#issuecomment-1960955297

ONNX export: https://github.com/WongKinYiu/yolov9/issues/2#issuecomment-1960519506 https://github.com/WongKinYiu/yolov9/issues/40#issue-2150697688 https://github.com/WongKinYiu/yolov9/issues/130#issue-2162045461

TensorRT inference: https://github.com/WongKinYiu/yolov9/issues/143#issuecomment-1975049660 https://github.com/WongKinYiu/yolov9/issues/34#issue-2150393690 https://github.com/WongKinYiu/yolov9/issues/79#issue-2153547004 https://github.com/WongKinYiu/yolov9/issues/143#issue-2164002309

QAT TensorRT: https://github.com/WongKinYiu/yolov9/issues/253#issue-2189520073

OpenVINO: https://github.com/WongKinYiu/yolov9/issues/164#issue-2168540003

C# ONNX inference: https://github.com/WongKinYiu/yolov9/issues/95#issue-2155974619

C# OpenVINO inference: https://github.com/WongKinYiu/yolov9/issues/95#issuecomment-1968131244

OpenCV: https://github.com/WongKinYiu/yolov9/issues/113#issuecomment-1971327672

Hugging Face demo: https://github.com/WongKinYiu/yolov9/issues/45#issuecomment-1961496943

Colab demo: https://github.com/WongKinYiu/yolov9/pull/18

ONNXSlim export: https://github.com/WongKinYiu/yolov9/pull/37

YOLOv9 ROS: https://github.com/WongKinYiu/yolov9/issues/144#issue-2164210644

YOLOv9 ROS TensorRT: https://github.com/WongKinYiu/yolov9/issues/145#issue-2164218595

YOLOv9 Julia: https://github.com/WongKinYiu/yolov9/issues/141#issuecomment-1973710107

YOLOv9 MLX: https://github.com/WongKinYiu/yolov9/issues/258#issue-2190586540

YOLOv9 ByteTrack: https://github.com/WongKinYiu/yolov9/issues/78#issue-2153512879

YOLOv9 DeepSORT: https://github.com/WongKinYiu/yolov9/issues/98#issue-2156172319

YOLOv9 counting: https://github.com/WongKinYiu/yolov9/issues/84#issue-2153904804

YOLOv9 face detection: https://github.com/WongKinYiu/yolov9/issues/121#issue-2160218766

YOLOv9 segmentation onnxruntime: https://github.com/WongKinYiu/yolov9/issues/151#issue-2165667350

Comet logging: https://github.com/WongKinYiu/yolov9/pull/110

MLflow logging: https://github.com/WongKinYiu/yolov9/pull/87

AnyLabeling tool: https://github.com/WongKinYiu/yolov9/issues/48#issue-2152139662

AX650N deploy: https://github.com/WongKinYiu/yolov9/issues/96#issue-2156115760

Conda environment: https://github.com/WongKinYiu/yolov9/pull/93

AutoDL docker environment: https://github.com/WongKinYiu/yolov9/issues/112#issue-2158203480

</details>

## Installation

Docker environment (recommended)
<details><summary> <b>Expand</b> </summary>

``` shell
# create the docker container; you can increase the shared-memory size if more is available
nvidia-docker run --name yolov9 -it -v your_coco_path/:/coco/ -v your_code_path/:/yolov9 --shm-size=64g nvcr.io/nvidia/pytorch:21.11-py3

# apt install required packages
apt update
apt install -y zip htop screen libgl1-mesa-glx

# pip install required packages
pip install seaborn thop

# go to code folder
cd /yolov9
```

</details>

## Evaluation

[`yolov9-c-converted.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-c-converted.pt) [`yolov9-e-converted.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-e-converted.pt) [`yolov9-c.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-c.pt) [`yolov9-e.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-e.pt) [`gelan-c.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c.pt) [`gelan-e.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-e.pt)

``` shell
# evaluate converted yolov9 models
python val.py --data data/coco.yaml --img 640 --batch 32 --conf 0.001 --iou 0.7 --device 0 --weights './yolov9-c-converted.pt' --save-json --name yolov9_c_c_640_val

# evaluate yolov9 models
# python val_dual.py --data data/coco.yaml --img 640 --batch 32 --conf 0.001 --iou 0.7 --device 0 --weights './yolov9-c.pt' --save-json --name yolov9_c_640_val

# evaluate gelan models
# python val.py --data data/coco.yaml --img 640 --batch 32 --conf 0.001 --iou 0.7 --device 0 --weights './gelan-c.pt' --save-json --name gelan_c_640_val
```

For `yolov9-c-converted.pt` you will get the following results:

```
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.530
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.702
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.578
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.362
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.585
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.693
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.392
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.652
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.702
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.541
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.760
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.844
```

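The `--save-json` flag additionally writes the detections as a COCO-format JSON file in the run directory, so the COCO evaluation can also be re-run offline with `pycocotools`. A minimal sketch, assuming a predictions file saved by `val.py` (the exact file name and location depend on the run) and the standard COCO val2017 annotations:

```python
# offline COCO evaluation of a predictions JSON saved with --save-json
# (both paths below are placeholders; adjust them to your dataset and run directory)
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = "coco/annotations/instances_val2017.json"        # ground-truth annotations
pred_file = "runs/val/yolov9_c_c_640_val/predictions.json"  # saved by val.py (name may differ)

coco_gt = COCO(ann_file)
coco_dt = coco_gt.loadRes(pred_file)

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints an AP/AR table like the one shown above
```
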
## Training

Data preparation

``` shell
bash scripts/get_coco.sh
```

* Download MS COCO dataset images ([train](http://images.cocodataset.org/zips/train2017.zip), [val](http://images.cocodataset.org/zips/val2017.zip), [test](http://images.cocodataset.org/zips/test2017.zip)) and [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip). If you have previously used a different version of YOLO, we strongly recommend that you delete the `train2017.cache` and `val2017.cache` files and redownload the [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip). The expected label format is sketched below.

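The labels follow the YOLOv5-style convention that this codebase inherits: one `.txt` file per image under `coco/labels/{split}/`, one object per line, given either as a normalized bounding box or as a polygon (see the comments in the training commands further down). The sketch below covers the box variant and shows how to map it back to pixel coordinates; the helper function and the example path are illustrative, not part of the repository.

```python
# parse a YOLO-format label file: each line is
#   class x_center y_center width height   (box values normalized to [0, 1])
def read_yolo_labels(path, img_w, img_h):
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            cls = int(parts[0])
            xc, yc, w, h = (float(v) for v in parts[1:5])
            # convert normalized xywh to pixel xyxy
            x1 = (xc - w / 2) * img_w
            y1 = (yc - h / 2) * img_h
            x2 = (xc + w / 2) * img_w
            y2 = (yc + h / 2) * img_h
            boxes.append((cls, x1, y1, x2, y2))
    return boxes

# example (hypothetical file name and image size):
# boxes = read_yolo_labels("coco/labels/val2017/000000000139.txt", 640, 426)
```
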
Single GPU training

``` shell
# train yolov9 models
python train_dual.py --workers 8 --device 0 --batch 16 --data data/coco.yaml --img 640 --cfg models/detect/yolov9-c.yaml --weights '' --name yolov9-c --hyp hyp.scratch-high.yaml --min-items 0 --epochs 500 --close-mosaic 15

# train gelan models
# python train.py --workers 8 --device 0 --batch 32 --data data/coco.yaml --img 640 --cfg models/detect/gelan-c.yaml --weights '' --name gelan-c --hyp hyp.scratch-high.yaml --min-items 0 --epochs 500 --close-mosaic 15
```

Multiple GPU training

``` shell
# train yolov9 models
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train_dual.py --workers 8 --device 0,1,2,3,4,5,6,7 --sync-bn --batch 128 --data data/coco.yaml --img 640 --cfg models/detect/yolov9-c.yaml --weights '' --name yolov9-c --hyp hyp.scratch-high.yaml --min-items 0 --epochs 500 --close-mosaic 15

# train gelan models
# python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.py --workers 8 --device 0,1,2,3 --sync-bn --batch 128 --data data/coco.yaml --img 640 --cfg models/detect/gelan-c.yaml --weights '' --name gelan-c --hyp hyp.scratch-high.yaml --min-items 0 --epochs 500 --close-mosaic 15
```


## Re-parameterization

See [reparameterization.ipynb](https://github.com/WongKinYiu/yolov9/blob/main/tools/reparameterization.ipynb).


## Inference

<div align="center">
    <a href="./">
        <img src="./figure/horses_prediction.jpg" width="49%"/>
    </a>
</div>

``` shell
# inference converted yolov9 models
python detect.py --source './data/images/horses.jpg' --img 640 --device 0 --weights './yolov9-c-converted.pt' --name yolov9_c_c_640_detect

# inference yolov9 models
# python detect_dual.py --source './data/images/horses.jpg' --img 640 --device 0 --weights './yolov9-c.pt' --name yolov9_c_640_detect

# inference gelan models
# python detect.py --source './data/images/horses.jpg' --img 640 --device 0 --weights './gelan-c.pt' --name gelan_c_c_640_detect
```

## Citation

```
@article{wang2024yolov9,
  title={{YOLOv9}: Learning What You Want to Learn Using Programmable Gradient Information},
  author={Wang, Chien-Yao and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2402.13616},
  year={2024}
}
```

```
@article{chang2023yolor,
  title={{YOLOR}-Based Multi-Task Learning},
  author={Chang, Hung-Shuo and Wang, Chien-Yao and Wang, Richard Robert and Chou, Gene and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2309.16921},
  year={2023}
}
```


## Teaser

Parts of the code of [YOLOR-Based Multi-Task Learning](https://arxiv.org/abs/2309.16921) are released in this repository.

<div align="center">
    <a href="./">
        <img src="./figure/multitask.png" width="99%"/>
    </a>
</div>

#### Object Detection

[`gelan-c-det.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c-det.pt)

`object detection`

``` shell
# coco/labels/{split}/*.txt
# bbox or polygon (1 instance 1 line)
python train.py --workers 8 --device 0 --batch 32 --data data/coco.yaml --img 640 --cfg models/detect/gelan-c.yaml --weights '' --name gelan-c-det --hyp hyp.scratch-high.yaml --min-items 0 --epochs 300 --close-mosaic 10
```

| Model | Test Size | Param. | FLOPs | AP<sup>box</sup> |
| :-- | :-: | :-: | :-: | :-: |
| [**GELAN-C-DET**](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c-det.pt) | 640 | 25.3M | 102.1G | **52.3%** |
| [**YOLOv9-C-DET**]() | 640 | 25.3M | 102.1G | **53.0%** |

#### Instance Segmentation

[`gelan-c-seg.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c-seg.pt)

`object detection` `instance segmentation`

``` shell
# coco/labels/{split}/*.txt
# polygon (1 instance 1 line)
python segment/train.py --workers 8 --device 0 --batch 32 --data coco.yaml --img 640 --cfg models/segment/gelan-c-seg.yaml --weights '' --name gelan-c-seg --hyp hyp.scratch-high.yaml --no-overlap --epochs 300 --close-mosaic 10
```

| Model | Test Size | Param. | FLOPs | AP<sup>box</sup> | AP<sup>mask</sup> |
| :-- | :-: | :-: | :-: | :-: | :-: |
| [**GELAN-C-SEG**](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c-seg.pt) | 640 | 27.4M | 144.6G | **52.3%** | **42.4%** |
| [**YOLOv9-C-SEG**]() | 640 | 27.4M | 145.5G | **53.3%** | **43.5%** |

#### Panoptic Segmentation

[`gelan-c-pan.pt`](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c-pan.pt)

`object detection` `instance segmentation` `semantic segmentation` `stuff segmentation` `panoptic segmentation`

``` shell
# coco/labels/{split}/*.txt
# polygon (1 instance 1 line)
# coco/stuff/{split}/*.txt
# polygon (1 semantic 1 line)
python panoptic/train.py --workers 8 --device 0 --batch 32 --data coco.yaml --img 640 --cfg models/panoptic/gelan-c-pan.yaml --weights '' --name gelan-c-pan --hyp hyp.scratch-high.yaml --no-overlap --epochs 300 --close-mosaic 10
```

| Model | Test Size | Param. | FLOPs | AP<sup>box</sup> | AP<sup>mask</sup> | mIoU<sub>164k/10k</sub><sup>semantic</sup> | mIoU<sup>stuff</sup> | PQ<sup>panoptic</sup> |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| [**GELAN-C-PAN**](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c-pan.pt) | 640 | 27.6M | 146.7G | **52.6%** | **42.5%** | **39.0%/48.3%** | **52.7%** | **39.4%** |
<!--| [**YOLOv9-C-PAN**]() | 640 | 28.8M | 187.0G | **%** | **%** | **** | **%** | **%** |-->

#### Image Captioning (not yet released)

<!--[`gelan-c-cap.pt`]()-->

`object detection` `instance segmentation` `semantic segmentation` `stuff segmentation` `panoptic segmentation` `image captioning`

``` shell
# coco/labels/{split}/*.txt
# polygon (1 instance 1 line)
# coco/stuff/{split}/*.txt
# polygon (1 semantic 1 line)
# coco/annotations/*.json
# json (1 split 1 file)
python caption/train.py --workers 8 --device 0 --batch 32 --data coco.yaml --img 640 --cfg models/caption/gelan-c-cap.yaml --weights '' --name gelan-c-cap --hyp hyp.scratch-high.yaml --no-overlap --epochs 300 --close-mosaic 10
```

| Model | Test Size | AP<sup>box</sup> | AP<sup>mask</sup> | mIoU<sup>semantic</sup> | mIoU<sup>stuff</sup> | PQ<sup>panoptic</sup> | BLEU@4<sup>caption</sup> | CIDEr<sup>caption</sup> |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| [**YOLOR-MT**]() | 640 | **51.0%** | **41.7%** | **49.6%** | **55.9%** | **40.5%** | **35.7** | **112.7** |
<!--| [**GELAN-C-CAP**]() | 640 | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| [**YOLOv9-C-CAP**]() | 640 | **-** | **-** | **-** | **-** | **-** | **-** | **-** |-->


## Acknowledgements

<details><summary> <b>Expand</b> </summary>

* [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)
* [https://github.com/WongKinYiu/yolor](https://github.com/WongKinYiu/yolor)
* [https://github.com/WongKinYiu/yolov7](https://github.com/WongKinYiu/yolov7)
* [https://github.com/VDIGPKU/DynamicDet](https://github.com/VDIGPKU/DynamicDet)
* [https://github.com/DingXiaoH/RepVGG](https://github.com/DingXiaoH/RepVGG)
* [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)
* [https://github.com/meituan/YOLOv6](https://github.com/meituan/YOLOv6)

</details>