Upload 663 files
This view is limited to 50 files because it contains too many changes.
- .github/ISSUE_TEMPLATE/bug-report.yml +97 -0
- .github/ISSUE_TEMPLATE/config.yml +16 -0
- .github/ISSUE_TEMPLATE/feature-request.yml +52 -0
- .github/ISSUE_TEMPLATE/question.yml +35 -0
- .github/dependabot.yml +27 -0
- .github/workflows/ci.yaml +359 -0
- .github/workflows/cla.yml +44 -0
- .github/workflows/codeql.yaml +42 -0
- .github/workflows/docker.yaml +203 -0
- .github/workflows/docs.yml +98 -0
- .github/workflows/format.yml +62 -0
- .github/workflows/links.yml +93 -0
- .github/workflows/merge-main-into-prs.yml +87 -0
- .github/workflows/publish.yml +144 -0
- .github/workflows/stale.yml +47 -0
- .gitignore +171 -0
- CITATION.cff +26 -0
- CONTRIBUTING.md +166 -0
- LICENSE +661 -0
- README.md +278 -3
- README.zh-CN.md +278 -0
- docker/Dockerfile +93 -0
- docker/Dockerfile-arm64 +58 -0
- docker/Dockerfile-conda +50 -0
- docker/Dockerfile-cpu +62 -0
- docker/Dockerfile-jetson-jetpack4 +69 -0
- docker/Dockerfile-jetson-jetpack5 +62 -0
- docker/Dockerfile-jetson-jetpack6 +59 -0
- docker/Dockerfile-python +59 -0
- docker/Dockerfile-runner +45 -0
- docs/README.md +146 -0
- docs/build_docs.py +258 -0
- docs/build_reference.py +147 -0
- docs/coming_soon_template.md +34 -0
- docs/en/CNAME +1 -0
- docs/en/datasets/classify/caltech101.md +152 -0
- docs/en/datasets/classify/caltech256.md +146 -0
- docs/en/datasets/classify/cifar10.md +173 -0
- docs/en/datasets/classify/cifar100.md +130 -0
- docs/en/datasets/classify/fashion-mnist.md +139 -0
- docs/en/datasets/classify/imagenet.md +132 -0
- docs/en/datasets/classify/imagenet10.md +127 -0
- docs/en/datasets/classify/imagenette.md +193 -0
- docs/en/datasets/classify/imagewoof.md +148 -0
- docs/en/datasets/classify/index.md +220 -0
- docs/en/datasets/classify/mnist.md +127 -0
- docs/en/datasets/detect/african-wildlife.md +147 -0
- docs/en/datasets/detect/argoverse.md +153 -0
- docs/en/datasets/detect/brain-tumor.md +168 -0
- docs/en/datasets/detect/coco.md +173 -0
.github/ISSUE_TEMPLATE/bug-report.yml
ADDED
@@ -0,0 +1,97 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

name: 🐛 Bug Report
# title: " "
description: Problems with Ultralytics YOLO
labels: [bug, triage]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for submitting an Ultralytics YOLO 🐛 Bug Report!

  - type: checkboxes
    attributes:
      label: Search before asking
      description: >
        Please search the Ultralytics [Docs](https://docs.ultralytics.com) and [issues](https://github.com/ultralytics/ultralytics/issues) to see if a similar bug report already exists.
      options:
        - label: >
            I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
          required: true

  - type: dropdown
    attributes:
      label: Ultralytics YOLO Component
      description: |
        Please select the Ultralytics YOLO component where you found the bug.
      multiple: true
      options:
        - "Install"
        - "Train"
        - "Val"
        - "Predict"
        - "Export"
        - "Multi-GPU"
        - "Augmentation"
        - "Hyperparameter Tuning"
        - "Integrations"
        - "Other"
    validations:
      required: false

  - type: textarea
    attributes:
      label: Bug
      description: Please provide as much information as possible. Copy and paste console output and error messages. Use [Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) to format text, code and logs. If necessary, include screenshots for visual elements only. Providing detailed information will help us resolve the issue more efficiently.
      placeholder: |
        💡 ProTip! Include as much information as possible (logs, tracebacks, screenshots, etc.) to receive the most helpful response.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Environment
      description: Many issues are often related to dependency versions and hardware. Please provide the output of `yolo checks` or `ultralytics.checks()` command to help us diagnose the problem.
      placeholder: |
        Paste output of `yolo checks` or `ultralytics.checks()` command, i.e.:
        ```
        Ultralytics 8.3.2 🚀 Python-3.11.2 torch-2.4.1 CPU (Apple M3)
        Setup complete ✅ (8 CPUs, 16.0 GB RAM, 266.5/460.4 GB disk)

        OS                  macOS-13.5.2
        Environment         Jupyter
        Python              3.11.2
        Install             git
        RAM                 16.00 GB
        CPU                 Apple M3
        CUDA                None
        ```
    validations:
      required: true

  - type: textarea
    attributes:
      label: Minimal Reproducible Example
      description: >
        When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimal reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/).
      placeholder: |
        ```
        # Code to reproduce your issue here
        ```
    validations:
      required: true

  - type: textarea
    attributes:
      label: Additional
      description: Anything else you would like to share?

  - type: checkboxes
    attributes:
      label: Are you willing to submit a PR?
      description: >
        (Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/ultralytics/pulls) (PR) to help improve Ultralytics YOLO for everyone, especially if you have a good understanding of how to implement a fix or feature.
        See the Ultralytics YOLO [Contributing Guide](https://docs.ultralytics.com/help/contributing) to get started.
      options:
        - label: Yes I'd like to help by submitting a PR!
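The Environment field above asks reporters for the output of `yolo checks` or `ultralytics.checks()`. A minimal sketch of generating that report locally, assuming the `ultralytics` package is installed (both commands are named in the template itself):

```bash
# Print the environment report requested by the bug template
yolo checks
# Equivalent Python form
python -c "import ultralytics; ultralytics.checks()"
```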
.github/ISSUE_TEMPLATE/config.yml
ADDED
@@ -0,0 +1,16 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

blank_issues_enabled: true
contact_links:
  - name: 📄 Docs
    url: https://docs.ultralytics.com/
    about: Full Ultralytics YOLO Documentation
  - name: 💬 Forum
    url: https://community.ultralytics.com/
    about: Ask on Ultralytics Community Forum
  - name: 🎧 Discord
    url: https://ultralytics.com/discord
    about: Ask on Ultralytics Discord
  - name: ⌨️ Reddit
    url: https://reddit.com/r/ultralytics
    about: Ask on Ultralytics Subreddit
.github/ISSUE_TEMPLATE/feature-request.yml
ADDED
@@ -0,0 +1,52 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

name: 🚀 Feature Request
description: Suggest an Ultralytics YOLO idea
# title: " "
labels: [enhancement]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for submitting an Ultralytics 🚀 Feature Request!

  - type: checkboxes
    attributes:
      label: Search before asking
      description: >
        Please search the Ultralytics [Docs](https://docs.ultralytics.com) and [issues](https://github.com/ultralytics/ultralytics/issues) to see if a similar feature request already exists.
      options:
        - label: >
            I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
          required: true

  - type: textarea
    attributes:
      label: Description
      description: A short description of your feature.
      placeholder: |
        What new feature would you like to see in YOLO?
    validations:
      required: true

  - type: textarea
    attributes:
      label: Use case
      description: |
        Describe the use case of your feature request. It will help us understand and prioritize the feature request.
      placeholder: |
        How would this feature be used, and who would use it?

  - type: textarea
    attributes:
      label: Additional
      description: Anything else you would like to share?

  - type: checkboxes
    attributes:
      label: Are you willing to submit a PR?
      description: >
        (Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/ultralytics/pulls) (PR) to help improve YOLO for everyone, especially if you have a good understanding of how to implement a fix or feature.
        See the Ultralytics [Contributing Guide](https://docs.ultralytics.com/help/contributing) to get started.
      options:
        - label: Yes I'd like to help by submitting a PR!
.github/ISSUE_TEMPLATE/question.yml
ADDED
@@ -0,0 +1,35 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

name: ❓ Question
description: Ask an Ultralytics YOLO question
# title: " "
labels: [question]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for asking an Ultralytics YOLO ❓ Question!

  - type: checkboxes
    attributes:
      label: Search before asking
      description: >
        Please search the Ultralytics [Docs](https://docs.ultralytics.com), [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) to see if a similar question already exists.
      options:
        - label: >
            I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
          required: true

  - type: textarea
    attributes:
      label: Question
      description: What is your question? Please provide as much information as possible. Include detailed code examples to reproduce the problem and describe the context in which the issue occurs. Format your text and code using [Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for clarity and readability. Following these guidelines will help us assist you more effectively.
      placeholder: |
        💡 ProTip! Include as much information as possible (logs, tracebacks, screenshots etc.) to receive the most helpful response.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Additional
      description: Anything else you would like to share?
.github/dependabot.yml
ADDED
@@ -0,0 +1,27 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Dependabot for package version updates
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: pip
    directory: "/"
    schedule:
      interval: weekly
      time: "04:00"
    open-pull-requests-limit: 10
    reviewers:
      - glenn-jocher
    labels:
      - dependencies

  - package-ecosystem: github-actions
    directory: "/.github/workflows"
    schedule:
      interval: weekly
      time: "04:00"
    open-pull-requests-limit: 5
    reviewers:
      - glenn-jocher
    labels:
      - dependencies
.github/workflows/ci.yaml
ADDED
@@ -0,0 +1,359 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO Continuous Integration (CI) GitHub Actions tests

name: Ultralytics CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: "0 8 * * *" # runs at 08:00 UTC every day
  workflow_dispatch:
    inputs:
      hub:
        description: "Run HUB"
        default: false
        type: boolean
      benchmarks:
        description: "Run Benchmarks"
        default: false
        type: boolean
      tests:
        description: "Run Tests"
        default: false
        type: boolean
      gpu:
        description: "Run GPU"
        default: false
        type: boolean
      raspberrypi:
        description: "Run Raspberry Pi"
        default: false
        type: boolean
      conda:
        description: "Run Conda"
        default: false
        type: boolean

jobs:
  HUB:
    if: github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event_name == 'push' || (github.event_name == 'workflow_dispatch' && github.event.inputs.hub == 'true'))
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest]
        python-version: ["3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: "pip" # caching pip dependencies
      - name: Install requirements
        shell: bash # for Windows compatibility
        run: |
          python -m pip install --upgrade pip wheel
          pip install . --extra-index-url https://download.pytorch.org/whl/cpu
      - name: Check environment
        run: |
          yolo checks
          pip list
      - name: Test HUB training
        shell: python
        env:
          API_KEY: ${{ secrets.ULTRALYTICS_HUB_API_KEY }}
          MODEL_ID: ${{ secrets.ULTRALYTICS_HUB_MODEL_ID }}
        run: |
          import os

          from ultralytics import YOLO, hub

          api_key, model_id = os.environ['API_KEY'], os.environ['MODEL_ID']
          hub.login(api_key)
          hub.reset_model(model_id)
          model = YOLO('https://hub.ultralytics.com/models/' + model_id)
          model.train()
      - name: Test HUB inference API
        shell: python
        env:
          API_KEY: ${{ secrets.ULTRALYTICS_HUB_API_KEY }}
          MODEL_ID: ${{ secrets.ULTRALYTICS_HUB_MODEL_ID }}
        run: |
          import os

          import requests
          import json

          api_key, model_id = os.environ['API_KEY'], os.environ['MODEL_ID']
          url = f"https://api.ultralytics.com/v1/predict/{model_id}"
          headers = {"x-api-key": api_key}
          data = {"size": 320, "confidence": 0.25, "iou": 0.45}
          with open("ultralytics/assets/zidane.jpg", "rb") as f:
              response = requests.post(url, headers=headers, data=data, files={"image": f})
          assert response.status_code == 200, f'Status code {response.status_code}, Reason {response.reason}'
          print(json.dumps(response.json(), indent=2))

  Benchmarks:
    if: github.event_name != 'workflow_dispatch' || github.event.inputs.benchmarks == 'true'
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-14]
        python-version: ["3.11"]
        model: [yolo11n]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: "pip" # caching pip dependencies
      - name: Install requirements
        shell: bash # for Windows compatibility
        run: |
          python -m pip install --upgrade pip wheel
          pip install -e ".[export]" "coverage[toml]" --extra-index-url https://download.pytorch.org/whl/cpu
      - name: Check environment
        run: |
          yolo checks
          pip list
      - name: Benchmark DetectionModel
        shell: bash
        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}.pt' imgsz=160 verbose=0.309
      - name: Benchmark ClassificationModel
        shell: bash
        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-cls.pt' imgsz=160 verbose=0.249
      - name: Benchmark YOLOWorld DetectionModel
        shell: bash
        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/yolov8s-worldv2.pt' imgsz=160 verbose=0.337
      - name: Benchmark SegmentationModel
        shell: bash
        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-seg.pt' imgsz=160 verbose=0.195
      - name: Benchmark PoseModel
        shell: bash
        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-pose.pt' imgsz=160 verbose=0.197
      - name: Benchmark OBBModel
        shell: bash
        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-obb.pt' imgsz=160 verbose=0.597
      - name: Benchmark YOLOv10Model
        shell: bash
        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/yolov10n.pt' imgsz=160 verbose=0.205
      - name: Merge Coverage Reports
        run: |
          coverage xml -o coverage-benchmarks.xml
      - name: Upload Coverage Reports to CodeCov
        if: github.repository == 'ultralytics/ultralytics'
        uses: codecov/codecov-action@v4
        with:
          flags: Benchmarks
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
      - name: Benchmark Summary
        run: |
          cat benchmarks.log
          echo "$(cat benchmarks.log)" >> $GITHUB_STEP_SUMMARY

  Tests:
    if: github.event_name != 'workflow_dispatch' || github.event.inputs.tests == 'true'
    timeout-minutes: 360
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-14, windows-latest]
        python-version: ["3.11"]
        torch: [latest]
        include:
          - os: ubuntu-latest
            python-version: "3.8" # torch 1.8.0 requires python >=3.6, <=3.8
            torch: "1.8.0" # min torch version CI https://pypi.org/project/torchvision/
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: "pip" # caching pip dependencies
      - name: Install requirements
        shell: bash # for Windows compatibility
        run: |
          # CoreML must be installed before export due to protobuf error from AutoInstall
          python -m pip install --upgrade pip wheel
          slow=""
          torch=""
          if [ "${{ matrix.torch }}" == "1.8.0" ]; then
            torch="torch==1.8.0 torchvision==0.9.0"
          fi
          if [[ "${{ github.event_name }}" =~ ^(schedule|workflow_dispatch)$ ]]; then
            slow="pycocotools mlflow ray[tune]"
          fi
          pip install -e ".[export]" $torch $slow pytest-cov --extra-index-url https://download.pytorch.org/whl/cpu
      - name: Check environment
        run: |
          yolo checks
          pip list
      - name: Pytest tests
        shell: bash # for Windows compatibility
        run: |
          slow=""
          if [[ "${{ github.event_name }}" =~ ^(schedule|workflow_dispatch)$ ]]; then
            slow="--slow"
          fi
          pytest $slow --cov=ultralytics/ --cov-report xml tests/
      - name: Upload Coverage Reports to CodeCov
        if: github.repository == 'ultralytics/ultralytics' # && matrix.os == 'ubuntu-latest' && matrix.python-version == '3.11'
        uses: codecov/codecov-action@v4
        with:
          flags: Tests
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

  GPU:
    if: github.repository == 'ultralytics/ultralytics' && (github.event_name != 'workflow_dispatch' || github.event.inputs.gpu == 'true')
    timeout-minutes: 360
    runs-on: gpu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install requirements
        run: pip install . pytest-cov
      - name: Check environment
        run: |
          yolo checks
          pip list
      - name: Pytest tests
        run: |
          slow=""
          if [[ "${{ github.event_name }}" =~ ^(schedule|workflow_dispatch)$ ]]; then
            slow="--slow"
          fi
          pytest $slow --cov=ultralytics/ --cov-report xml tests/test_cuda.py
      - name: Upload Coverage Reports to CodeCov
        uses: codecov/codecov-action@v4
        with:
          flags: GPU
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

  RaspberryPi:
    if: github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event.inputs.raspberrypi == 'true')
    timeout-minutes: 120
    runs-on: raspberry-pi
    steps:
      - uses: actions/checkout@v4
      - name: Activate Virtual Environment
        run: |
          python3.11 -m venv env
          source env/bin/activate
          echo PATH=$PATH >> $GITHUB_ENV
      - name: Install requirements
        run: |
          python -m pip install --upgrade pip wheel
          pip install -e ".[export]" pytest mlflow pycocotools "ray[tune]"
      - name: Check environment
        run: |
          yolo checks
          pip list
      - name: Pytest tests
        run: pytest --slow tests/
      - name: Benchmark ClassificationModel
        run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-cls.pt' imgsz=160 verbose=0.249
      - name: Benchmark YOLOWorld DetectionModel
        run: python -m ultralytics.cfg.__init__ benchmark model='yolov8s-worldv2.pt' imgsz=160 verbose=0.337
      - name: Benchmark SegmentationModel
        run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-seg.pt' imgsz=160 verbose=0.195
      - name: Benchmark PoseModel
        run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-pose.pt' imgsz=160 verbose=0.197
      - name: Benchmark OBBModel
        run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-obb.pt' imgsz=160 verbose=0.597
      - name: Benchmark YOLOv10Model
        run: python -m ultralytics.cfg.__init__ benchmark model='yolov10n.pt' imgsz=160 verbose=0.205
      - name: Benchmark Summary
        run: |
          cat benchmarks.log
          echo "$(cat benchmarks.log)" >> $GITHUB_STEP_SUMMARY
      # The below is fixed in: https://github.com/ultralytics/ultralytics/pull/15987
      # - name: Reboot # run a reboot command in the background to free resources for next run and not crash main thread
      #   run: sudo bash -c "sleep 10; reboot" &

  Conda:
    if: github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event.inputs.conda == 'true')
    continue-on-error: true
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest]
        python-version: ["3.11"]
    defaults:
      run:
        shell: bash -el {0}
    steps:
      - uses: conda-incubator/setup-miniconda@v3
        with:
          python-version: ${{ matrix.python-version }}
          mamba-version: "*"
          channels: conda-forge,defaults
          channel-priority: true
          activate-environment: anaconda-client-env
      - name: Cleanup toolcache
        run: |
          echo "Free space before deletion:"
          df -h /
          rm -rf /opt/hostedtoolcache
          echo "Free space after deletion:"
          df -h /
      - name: Install Linux packages
        run: |
          # Fix cv2 ImportError: 'libEGL.so.1: cannot open shared object file: No such file or directory'
          sudo apt-get update
          sudo apt-get install -y libegl1 libopengl0
      - name: Install Libmamba
        run: |
          conda config --set solver libmamba
      - name: Install Ultralytics package from conda-forge
        run: |
          conda install -c pytorch -c conda-forge pytorch torchvision ultralytics openvino
      - name: Install pip packages
        run: |
          # CoreML must be installed before export due to protobuf error from AutoInstall
          pip install pytest "coremltools>=7.0; platform_system != 'Windows' and python_version <= '3.11'"
      - name: Check environment
        run: |
          conda list
      - name: Test CLI
        run: |
          yolo predict model=yolo11n.pt imgsz=320
          yolo train model=yolo11n.pt data=coco8.yaml epochs=1 imgsz=32
          yolo val model=yolo11n.pt data=coco8.yaml imgsz=32
          yolo export model=yolo11n.pt format=torchscript imgsz=160
      - name: Test Python
        # Note this step must use the updated default bash environment, not a python environment
        run: |
          python -c "
          from ultralytics import YOLO
          model = YOLO('yolo11n.pt')
          results = model.train(data='coco8.yaml', epochs=3, imgsz=160)
          results = model.val(imgsz=160)
          results = model.predict(imgsz=160)
          results = model.export(format='onnx', imgsz=160)
          "
      - name: PyTest
        run: |
          VERSION=$(conda list ultralytics | grep ultralytics | awk '{print $2}')
          echo "Ultralytics version: $VERSION"
          git clone https://github.com/ultralytics/ultralytics.git
          cd ultralytics
          git checkout tags/v$VERSION
          pytest tests

  Summary:
    runs-on: ubuntu-latest
    needs: [HUB, Benchmarks, Tests, GPU, RaspberryPi, Conda] # Add job names that you want to check for failure
    if: always() # This ensures the job runs even if previous jobs fail
    steps:
      - name: Check for failure and notify
        if: (needs.HUB.result == 'failure' || needs.Benchmarks.result == 'failure' || needs.Tests.result == 'failure' || needs.GPU.result == 'failure' || needs.RaspberryPi.result == 'failure' || needs.Conda.result == 'failure') && github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event_name == 'push')
        uses: slackapi/[email protected]
        with:
          payload: |
            {"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n"}
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
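The Benchmarks job above can be approximated on a local machine. A sketch using the workflow's own install and benchmark commands; `yolo11n.pt` matches the workflow's matrix model:

```bash
# Mirror the CI Benchmarks job locally (commands taken from the workflow above)
python -m pip install --upgrade pip wheel
pip install -e ".[export]" "coverage[toml]" --extra-index-url https://download.pytorch.org/whl/cpu
yolo checks  # sanity-check the environment first
coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='yolo11n.pt' imgsz=160 verbose=0.309
coverage xml -o coverage-benchmarks.xml  # merge coverage as the workflow does
```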
.github/workflows/cla.yml
ADDED
@@ -0,0 +1,44 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Ultralytics Contributor License Agreement (CLA) action https://docs.ultralytics.com/help/CLA
# This workflow automatically requests Pull Requests (PR) authors to sign the Ultralytics CLA before PRs can be merged

name: CLA Assistant
on:
  issue_comment:
    types:
      - created
  pull_request_target:
    types:
      - reopened
      - opened
      - synchronize

permissions:
  actions: write
  contents: write
  pull-requests: write
  statuses: write

jobs:
  CLA:
    if: github.repository == 'ultralytics/ultralytics'
    runs-on: ubuntu-latest
    steps:
      - name: CLA Assistant
        if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I sign the CLA') || github.event_name == 'pull_request_target'
        uses: contributor-assistant/[email protected]
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # Must be repository secret PAT
          PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
        with:
          path-to-signatures: "signatures/version1/cla.json"
          path-to-document: "https://docs.ultralytics.com/help/CLA" # CLA document
          # Branch must not be protected
          branch: cla-signatures
          allowlist: dependabot[bot],github-actions,[pre-commit*,pre-commit*,bot*

          remote-organization-name: ultralytics
          remote-repository-name: cla
          custom-pr-sign-comment: "I have read the CLA Document and I sign the CLA"
          custom-allsigned-prcomment: All Contributors have signed the CLA. ✅
.github/workflows/codeql.yaml
ADDED
@@ -0,0 +1,42 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

name: "CodeQL"

on:
  schedule:
    - cron: "0 0 1 * *"
  workflow_dispatch:

jobs:
  analyze:
    name: Analyze
    runs-on: ${{ 'ubuntu-latest' }}
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: ["python"]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: security-extended,security-and-quality

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
        with:
          category: "/language:${{matrix.language}}"
.github/workflows/docker.yaml
ADDED
@@ -0,0 +1,203 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest images on DockerHub https://hub.docker.com/r/ultralytics

name: Publish Docker Images

on:
  push:
    branches: [main]
    paths-ignore:
      - "docs/**"
      - "mkdocs.yml"
  workflow_dispatch:
    inputs:
      Dockerfile:
        type: boolean
        description: Use Dockerfile
        default: true
      Dockerfile-cpu:
        type: boolean
        description: Use Dockerfile-cpu
        default: true
      Dockerfile-arm64:
        type: boolean
        description: Use Dockerfile-arm64
        default: true
      Dockerfile-jetson-jetpack6:
        type: boolean
        description: Use Dockerfile-jetson-jetpack6
        default: true
      Dockerfile-jetson-jetpack5:
        type: boolean
        description: Use Dockerfile-jetson-jetpack5
        default: true
      Dockerfile-jetson-jetpack4:
        type: boolean
        description: Use Dockerfile-jetson-jetpack4
        default: true
      Dockerfile-python:
        type: boolean
        description: Use Dockerfile-python
        default: true
      Dockerfile-conda:
        type: boolean
        description: Use Dockerfile-conda
        default: true
      push:
        type: boolean
        description: Publish all Images to Docker Hub

jobs:
  docker:
    if: github.repository == 'ultralytics/ultralytics'
    name: Push
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      max-parallel: 10
      matrix:
        include:
          - dockerfile: "Dockerfile"
            tags: "latest"
            platforms: "linux/amd64"
          - dockerfile: "Dockerfile-cpu"
            tags: "latest-cpu"
            platforms: "linux/amd64"
          - dockerfile: "Dockerfile-arm64"
            tags: "latest-arm64"
            platforms: "linux/arm64"
          - dockerfile: "Dockerfile-jetson-jetpack6"
            tags: "latest-jetson-jetpack6"
            platforms: "linux/arm64"
          - dockerfile: "Dockerfile-jetson-jetpack5"
            tags: "latest-jetson-jetpack5"
            platforms: "linux/arm64"
          - dockerfile: "Dockerfile-jetson-jetpack4"
            tags: "latest-jetson-jetpack4"
            platforms: "linux/arm64"
          - dockerfile: "Dockerfile-python"
            tags: "latest-python"
            platforms: "linux/amd64"
          # - dockerfile: "Dockerfile-conda"
          #   tags: "latest-conda"
          #   platforms: "linux/amd64"
    outputs:
      new_release: ${{ steps.check_tag.outputs.new_release }}
    steps:
      - name: Cleanup disk
        # Free up to 30GB of disk space per https://github.com/ultralytics/ultralytics/pull/15848
        uses: jlumbroso/[email protected]
        with:
          tool-cache: true

      - name: Checkout repo
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # copy full .git directory to access full git history in Docker images

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Retrieve Ultralytics version
        id: get_version
        run: |
          VERSION=$(grep "^__version__ =" ultralytics/__init__.py | awk -F'"' '{print $2}')
          echo "Retrieved Ultralytics version: $VERSION"
          echo "version=$VERSION" >> $GITHUB_OUTPUT
          VERSION_TAG=$(echo "${{ matrix.tags }}" | sed "s/latest/${VERSION}/")
          echo "Intended version tag: $VERSION_TAG"
          echo "version_tag=$VERSION_TAG" >> $GITHUB_OUTPUT

      - name: Check if version tag exists on DockerHub
        id: check_tag
        run: |
          RESPONSE=$(curl -s https://hub.docker.com/v2/repositories/ultralytics/ultralytics/tags/$VERSION_TAG)
          MESSAGE=$(echo $RESPONSE | jq -r '.message')
          if [[ "$MESSAGE" == "null" ]]; then
            echo "Tag $VERSION_TAG already exists on DockerHub."
            echo "new_release=false" >> $GITHUB_OUTPUT
          elif [[ "$MESSAGE" == *"404"* ]]; then
            echo "Tag $VERSION_TAG does not exist on DockerHub."
            echo "new_release=true" >> $GITHUB_OUTPUT
          else
            echo "Unexpected response from DockerHub. Please check manually."
            echo "new_release=false" >> $GITHUB_OUTPUT
          fi
        env:
          VERSION_TAG: ${{ steps.get_version.outputs.version_tag }}

      - name: Build Image
        if: github.event_name == 'push' || github.event.inputs[matrix.dockerfile] == 'true'
        uses: nick-invision/retry@v3
        with:
          timeout_minutes: 120
          retry_wait_seconds: 60
          max_attempts: 3 # retry twice
          command: |
            docker build \
              --platform ${{ matrix.platforms }} \
              -f docker/${{ matrix.dockerfile }} \
              -t ultralytics/ultralytics:${{ matrix.tags }} \
              -t ultralytics/ultralytics:${{ steps.get_version.outputs.version_tag }} \
              .

      - name: Run Tests
        if: (github.event_name == 'push' || github.event.inputs[matrix.dockerfile] == 'true') && matrix.platforms == 'linux/amd64' && matrix.dockerfile != 'Dockerfile-conda' # arm64 images not supported on GitHub CI runners
        run: docker run ultralytics/ultralytics:${{ matrix.tags }} /bin/bash -c "pip install pytest && pytest tests"

      - name: Run Benchmarks
        # WARNING: Dockerfile (GPU) error on TF.js export 'module 'numpy' has no attribute 'object'.
        if: (github.event_name == 'push' || github.event.inputs[matrix.dockerfile] == 'true') && matrix.platforms == 'linux/amd64' && matrix.dockerfile != 'Dockerfile' && matrix.dockerfile != 'Dockerfile-conda' # arm64 images not supported on GitHub CI runners
        run: docker run ultralytics/ultralytics:${{ matrix.tags }} yolo benchmark model=yolo11n.pt imgsz=160 verbose=0.309

      - name: Push Docker Image with Ultralytics version tag
        if: (github.event_name == 'push' || (github.event.inputs[matrix.dockerfile] == 'true' && github.event.inputs.push == 'true')) && steps.check_tag.outputs.new_release == 'true' && matrix.dockerfile != 'Dockerfile-conda'
        run: |
          docker push ultralytics/ultralytics:${{ steps.get_version.outputs.version_tag }}

      - name: Push Docker Image with latest tag
        if: github.event_name == 'push' || (github.event.inputs[matrix.dockerfile] == 'true' && github.event.inputs.push == 'true')
        run: |
          docker push ultralytics/ultralytics:${{ matrix.tags }}
          if [[ "${{ matrix.tags }}" == "latest" ]]; then
            t=ultralytics/ultralytics:latest-runner
            docker build -f docker/Dockerfile-runner -t $t .
            docker push $t
          fi

  trigger-actions:
    runs-on: ubuntu-latest
    needs: docker
    # Only trigger actions on new Ultralytics releases
    if: success() && github.repository == 'ultralytics/ultralytics' && github.event_name == 'push' && needs.docker.outputs.new_release == 'true'
    steps:
      - name: Trigger Additional GitHub Actions
        env:
          GH_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
        run: |
          gh workflow run deploy_cloud_run.yml \
            --repo ultralytics/assistant \
            --ref main

  notify:
    runs-on: ubuntu-latest
    needs: [docker, trigger-actions]
    if: always()
    steps:
      - name: Check for failure and notify
        if: needs.docker.result == 'failure' && github.repository == 'ultralytics/ultralytics' && github.event_name == 'push'
        uses: slackapi/[email protected]
        with:
          payload: |
            {"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n"}
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
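For reference, the images this workflow publishes can be pulled and exercised directly. A sketch, where the tag comes from the workflow's matrix; the `--ipc=host` flag follows common Ultralytics Docker guidance and is an assumption, not part of this workflow:

```bash
# Pull a published image and run a quick prediction inside it
sudo docker pull ultralytics/ultralytics:latest-cpu
sudo docker run -it --ipc=host ultralytics/ultralytics:latest-cpu \
  yolo predict model=yolo11n.pt imgsz=320
```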
.github/workflows/docs.yml
ADDED
@@ -0,0 +1,98 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Test and publish docs to https://docs.ultralytics.com
# Ignores the following Docs rules to match Google-style docstrings:
# D100: Missing docstring in public module
# D104: Missing docstring in public package
# D203: 1 blank line required before class docstring
# D205: 1 blank line required between summary line and description
# D212: Multi-line docstring summary should start at the first line
# D213: Multi-line docstring summary should start at the second line
# D401: First line of docstring should be in imperative mood
# D406: Section name should end with a newline
# D407: Missing dashed underline after section
# D413: Missing blank line after last section

name: Publish Docs

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_dispatch:

jobs:
  Docs:
    if: github.repository == 'ultralytics/ultralytics'
    runs-on: macos-14
    steps:
      - name: Git config
        run: |
          git config --global user.name "UltralyticsAssistant"
          git config --global user.email "[email protected]"
      - name: Checkout Repository
        uses: actions/checkout@v4
        with:
          repository: ${{ github.event.pull_request.head.repo.full_name || github.repository }}
          token: ${{ secrets.PERSONAL_ACCESS_TOKEN || secrets.GITHUB_TOKEN }}
          ref: ${{ github.head_ref || github.ref }}
          fetch-depth: 0
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.x"
          cache: "pip" # caching pip dependencies
      - name: Install Dependencies
        run: pip install ruff black tqdm mkdocs-material "mkdocstrings[python]" mkdocs-jupyter mkdocs-redirects mkdocs-ultralytics-plugin mkdocs-macros-plugin
      - name: Ruff fixes
        continue-on-error: true
        run: ruff check --fix --unsafe-fixes --select D --ignore=D100,D104,D203,D205,D212,D213,D401,D406,D407,D413 .
      - name: Update Docs Reference Section and Push Changes
        continue-on-error: true
        run: |
          python docs/build_reference.py
          git pull origin ${{ github.head_ref || github.ref }}
          git add .
          git reset HEAD -- .github/workflows/ # workflow changes are not permitted with default token
          if ! git diff --staged --quiet; then
            git commit -m "Auto-update Ultralytics Docs Reference by https://ultralytics.com/actions"
            git push
          else
            echo "No changes to commit"
          fi
      - name: Ruff checks
        run: ruff check --select D --ignore=D100,D104,D203,D205,D212,D213,D401,D406,D407,D413 .
      - name: Build Docs and Check for Warnings
        run: |
          export JUPYTER_PLATFORM_DIRS=1
          python docs/build_docs.py
      - name: Commit and Push Docs changes
        continue-on-error: true
        if: always()
        run: |
          git pull origin ${{ github.head_ref || github.ref }}
          git add --update # only add updated files
          git reset HEAD -- .github/workflows/ # workflow changes are not permitted with default token
          if ! git diff --staged --quiet; then
            git commit -m "Auto-update Ultralytics Docs by https://ultralytics.com/actions"
            git push
          else
            echo "No changes to commit"
          fi
      - name: Publish Docs to https://docs.ultralytics.com
        if: github.event_name == 'push'
        run: |
          git clone https://github.com/ultralytics/docs.git docs-repo
          cd docs-repo
          git checkout gh-pages || git checkout -b gh-pages
          rm -rf *
          cp -R ../site/* .
          echo "${{ secrets.INDEXNOW_KEY_DOCS }}" > "${{ secrets.INDEXNOW_KEY_DOCS }}.txt"
          git add .
          if git diff --staged --quiet; then
            echo "No changes to commit"
          else
            LATEST_HASH=$(git rev-parse --short=7 HEAD)
            git commit -m "Update Docs for 'ultralytics ${{ steps.check_pypi.outputs.version }} - $LATEST_HASH'"
            git push https://${{ secrets.PERSONAL_ACCESS_TOKEN }}@github.com/ultralytics/docs.git gh-pages
          fi
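The docs build in this workflow can be reproduced locally. A sketch using the workflow's own dependency list and build command; the `site/` output directory is what the publish step copies to `gh-pages`:

```bash
# Build the docs locally with the same toolchain as the workflow
pip install ruff black tqdm mkdocs-material "mkdocstrings[python]" mkdocs-jupyter mkdocs-redirects mkdocs-ultralytics-plugin mkdocs-macros-plugin
export JUPYTER_PLATFORM_DIRS=1
python docs/build_docs.py  # output lands in ./site
```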
.github/workflows/format.yml
ADDED
@@ -0,0 +1,62 @@
# Ultralytics 🚀 - AGPL-3.0 License https://ultralytics.com/license
# Ultralytics Actions https://github.com/ultralytics/actions
# This workflow automatically formats code and documentation in PRs to official Ultralytics standards

name: Ultralytics Actions

on:
  issues:
    types: [opened, edited]
  discussion:
    types: [created]
  pull_request_target:
    branches: [main]
    types: [opened, closed, synchronize, review_requested]

jobs:
  format:
    runs-on: macos-14
    steps:
      - name: Run Ultralytics Formatting
        uses: ultralytics/actions@main
        with:
          token: ${{ secrets.PERSONAL_ACCESS_TOKEN || secrets.GITHUB_TOKEN }} # note GITHUB_TOKEN automatically generated
          labels: true # autolabel issues and PRs
          python: true # format Python code and docstrings
          prettier: true # format YAML, JSON, Markdown and CSS
          spelling: true # check spelling
          links: false # check broken links
          summary: true # print PR summary with GPT4o (requires 'openai_api_key')
          openai_azure_api_key: ${{ secrets.OPENAI_AZURE_API_KEY }}
          openai_azure_endpoint: ${{ secrets.OPENAI_AZURE_ENDPOINT }}
          first_issue_response: |
            👋 Hello @${{ github.actor }}, thank you for your interest in Ultralytics 🚀! We recommend a visit to the [Docs](https://docs.ultralytics.com) for new users where you can find many [Python](https://docs.ultralytics.com/usage/python/) and [CLI](https://docs.ultralytics.com/usage/cli/) usage examples and where many of the most common questions may already be answered.

            If this is a 🐛 Bug Report, please provide a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) to help us debug it.

            If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our [Tips for Best Training Results](https://docs.ultralytics.com/guides/model-training-tips/).

            Join the Ultralytics community where it suits you best. For real-time chat, head to [Discord](https://ultralytics.com/discord) 🎧. Prefer in-depth discussions? Check out [Discourse](https://community.ultralytics.com). Or dive into threads on our [Subreddit](https://reddit.com/r/ultralytics) to share knowledge with the community.

            ## Upgrade

            Upgrade to the latest `ultralytics` package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) in a [**Python>=3.8**](https://www.python.org/) environment with [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/) to verify your issue is not already resolved in the latest version:

            ```bash
            pip install -U ultralytics
            ```

            ## Environments

            YOLO may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):

            - **Notebooks** with free GPU: <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a> <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov8"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
            - **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
            - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/)
            - **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Docker Pulls"></a>

            ## Status

            <a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>

            If this badge is green, all [Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule) tests are currently passing. CI tests verify correct operation of all YOLO [Modes](https://docs.ultralytics.com/modes/) and [Tasks](https://docs.ultralytics.com/tasks/) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
.github/workflows/links.yml
ADDED
@@ -0,0 +1,93 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
+# Ultralytics YOLO 🚀, AGPL-3.0 license
+# Continuous Integration (CI) GitHub Actions tests broken link checker using https://github.com/lycheeverse/lychee
+# Ignores the following status codes to reduce false positives:
+#   - 401(Vimeo, 'unauthorized')
+#   - 403(OpenVINO, 'forbidden')
+#   - 429(Instagram, 'too many requests')
+#   - 500(Zenodo, 'cached')
+#   - 502(Zenodo, 'bad gateway')
+#   - 999(LinkedIn, 'unknown status code')
+
+name: Check Broken links
+
+on:
+  workflow_dispatch:
+  schedule:
+    - cron: "0 0 * * *" # runs at 00:00 UTC every day
+
+jobs:
+  Links:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Download and install lychee
+        run: |
+          LYCHEE_URL=$(curl -s https://api.github.com/repos/lycheeverse/lychee/releases/latest | grep "browser_download_url" | grep "x86_64-unknown-linux-gnu.tar.gz" | cut -d '"' -f 4)
+          curl -L $LYCHEE_URL -o lychee.tar.gz
+          tar xzf lychee.tar.gz
+          sudo mv lychee /usr/local/bin
+
+      - name: Test Markdown and HTML links with retry
+        uses: nick-invision/retry@v3
+        with:
+          timeout_minutes: 5
+          retry_wait_seconds: 60
+          max_attempts: 3
+          command: |
+            lychee \
+            --scheme https \
+            --timeout 60 \
+            --insecure \
+            --accept 401,403,429,500,502,999 \
+            --exclude-all-private \
+            --exclude 'https?://(www\.)?(linkedin\.com|twitter\.com|instagram\.com|kaggle\.com|fonts\.gstatic\.com|url\.com)' \
+            --exclude-path docs/zh \
+            --exclude-path docs/es \
+            --exclude-path docs/ru \
+            --exclude-path docs/pt \
+            --exclude-path docs/fr \
+            --exclude-path docs/de \
+            --exclude-path docs/ja \
+            --exclude-path docs/ko \
+            --exclude-path docs/hi \
+            --exclude-path docs/ar \
+            --github-token ${{ secrets.GITHUB_TOKEN }} \
+            --header "User-Agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.6478.183 Safari/537.36" \
+            './**/*.md' \
+            './**/*.html'
+
+      - name: Test Markdown, HTML, YAML, Python and Notebook links with retry
+        if: github.event_name == 'workflow_dispatch'
+        uses: nick-invision/retry@v3
+        with:
+          timeout_minutes: 5
+          retry_wait_seconds: 60
+          max_attempts: 3
+          command: |
+            lychee \
+            --scheme https \
+            --timeout 60 \
+            --insecure \
+            --accept 401,403,429,500,502,999 \
+            --exclude-all-private \
+            --exclude 'https?://(www\.)?(linkedin\.com|twitter\.com|instagram\.com|kaggle\.com|fonts\.gstatic\.com|url\.com)' \
+            --exclude-path '**/ci.yaml' \
+            --exclude-path docs/zh \
+            --exclude-path docs/es \
+            --exclude-path docs/ru \
+            --exclude-path docs/pt \
+            --exclude-path docs/fr \
+            --exclude-path docs/de \
+            --exclude-path docs/ja \
+            --exclude-path docs/ko \
+            --exclude-path docs/hi \
+            --exclude-path docs/ar \
+            --github-token ${{ secrets.GITHUB_TOKEN }} \
+            --header "User-Agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.6478.183 Safari/537.36" \
+            './**/*.md' \
+            './**/*.html' \
+            './**/*.yml' \
+            './**/*.yaml' \
+            './**/*.py' \
+            './**/*.ipynb'
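
The `--accept` flag above is the heart of this workflow: certain status codes from bot-hostile hosts (Vimeo, OpenVINO, Instagram, Zenodo, LinkedIn) are treated as live links to cut false positives. As a rough illustration only, here is a minimal Python sketch of that acceptance policy using `requests`; the regex-based URL extraction, the simplified `User-Agent`, and the function names are assumptions of the sketch, not part of the repository:

```python
# Minimal sketch of the link-check policy above (not lychee itself).
# Assumes: `requests` is installed; URLs are found with a naive regex.
import re
from pathlib import Path

import requests

ACCEPT = {401, 403, 429, 500, 502, 999}  # statuses treated as "ok", per the workflow
HEADERS = {"User-Agent": "Mozilla/5.0"}  # the workflow uses a full Chrome UA string
URL_RE = re.compile(r"https://[^\s)\"'>]+")


def check_file(path: Path) -> list[str]:
    """Return URLs in `path` that are neither reachable nor in the accepted set."""
    broken = []
    for url in URL_RE.findall(path.read_text(errors="ignore")):
        try:
            status = requests.head(url, headers=HEADERS, timeout=60, allow_redirects=True).status_code
        except requests.RequestException:
            broken.append(url)
            continue
        if status not in ACCEPT and not 200 <= status < 400:
            broken.append(url)
    return broken


if __name__ == "__main__":
    for md in Path(".").rglob("*.md"):
        for url in check_file(md):
            print(f"BROKEN {url} in {md}")
```

lychee itself additionally retries, honors the exclusion lists, and parses Markdown and HTML properly, which this sketch deliberately omits.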
.github/workflows/merge-main-into-prs.yml
ADDED
@@ -0,0 +1,87 @@
+# Ultralytics YOLO 🚀, AGPL-3.0 license
+# Automatically merges repository 'main' branch into all open PRs to keep them up-to-date
+# Action runs on updates to main branch so when one PR merges to main all others update
+
+name: Merge main into PRs
+
+on:
+  workflow_dispatch:
+  # push:
+  #   branches:
+  #     - ${{ github.event.repository.default_branch }}
+
+jobs:
+  Merge:
+    if: github.repository == 'ultralytics/ultralytics'
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - uses: actions/setup-python@v5
+        with:
+          python-version: "3.x"
+          cache: "pip"
+      - name: Install requirements
+        run: |
+          pip install pygithub
+      - name: Merge default branch into PRs
+        shell: python
+        run: |
+          from github import Github
+          import os
+          import time
+
+          g = Github("${{ secrets.PERSONAL_ACCESS_TOKEN }}")
+          repo = g.get_repo("${{ github.repository }}")
+
+          # Fetch the default branch name
+          default_branch_name = repo.default_branch
+          default_branch = repo.get_branch(default_branch_name)
+
+          # Initialize counters
+          updated_branches = 0
+          up_to_date_branches = 0
+          errors = 0
+
+          for pr in repo.get_pulls(state='open', sort='created'):
+              try:
+                  # Label PRs as popular for positive reactions
+                  reactions = pr.as_issue().get_reactions()
+                  if sum([(1 if r.content not in {"-1", "confused"} else 0) for r in reactions]) > 5:
+                      pr.set_labels(*("popular",) + tuple(l.name for l in pr.get_labels()))
+
+                  # Get full names for repositories and branches
+                  base_repo_name = repo.full_name
+                  head_repo_name = pr.head.repo.full_name
+                  base_branch_name = pr.base.ref
+                  head_branch_name = pr.head.ref
+
+                  # Check if PR is behind the default branch
+                  comparison = repo.compare(default_branch.commit.sha, pr.head.sha)
+                  if comparison.behind_by > 0:
+                      print(f"⚠️ PR #{pr.number} ({head_repo_name}:{head_branch_name} -> {base_repo_name}:{base_branch_name}) is behind {default_branch_name} by {comparison.behind_by} commit(s).")
+
+                      # Attempt to update the branch
+                      try:
+                          success = pr.update_branch()
+                          assert success, "Branch update failed"
+                          print(f"✅ Successfully merged '{default_branch_name}' into PR #{pr.number} ({head_repo_name}:{head_branch_name} -> {base_repo_name}:{base_branch_name}).")
+                          updated_branches += 1
+                          time.sleep(10)  # rate limit merges
+                      except Exception as update_error:
+                          print(f"❌ Could not update PR #{pr.number} ({head_repo_name}:{head_branch_name} -> {base_repo_name}:{base_branch_name}): {update_error}")
+                          errors += 1
+                  else:
+                      print(f"✅ PR #{pr.number} ({head_repo_name}:{head_branch_name} -> {base_repo_name}:{base_branch_name}) is already up to date with {default_branch_name}, no merge required.")
+                      up_to_date_branches += 1
+              except Exception as e:
+                  print(f"❌ Could not process PR #{pr.number}: {e}")
+                  errors += 1
+
+          # Print summary
+          print("\n\nSummary:")
+          print(f"Branches updated: {updated_branches}")
+          print(f"Branches already up-to-date: {up_to_date_branches}")
+          print(f"Total errors: {errors}")
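
The decisive calls in the step above are `repo.compare(default_sha, pr.head.sha).behind_by` and `pr.update_branch()`. For readers who want to dry-run just the "is this PR behind?" check locally, a minimal sketch, assuming PyGithub is installed and a `GITHUB_TOKEN` environment variable is set; the repository name and the five-PR limit are illustrative:

```python
# Minimal, read-only sketch of the behind-by check used in the workflow above.
import os

from github import Github

repo = Github(os.environ["GITHUB_TOKEN"]).get_repo("ultralytics/ultralytics")
default_sha = repo.get_branch(repo.default_branch).commit.sha

for pr in repo.get_pulls(state="open", sort="created")[:5]:  # first few PRs only
    behind = repo.compare(default_sha, pr.head.sha).behind_by
    print(f"PR #{pr.number}: {f'behind by {behind} commit(s)' if behind else 'up to date'}")
```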
.github/workflows/publish.yml
ADDED
@@ -0,0 +1,144 @@
+# Ultralytics YOLO 🚀, AGPL-3.0 license
+# Publish pip package to PyPI https://pypi.org/project/ultralytics/
+
+name: Publish to PyPI
+
+on:
+  push:
+    branches: [main]
+  workflow_dispatch:
+    inputs:
+      pypi:
+        type: boolean
+        description: Publish to PyPI
+
+jobs:
+  publish:
+    if: github.repository == 'ultralytics/ultralytics' && github.actor == 'glenn-jocher'
+    name: Publish
+    runs-on: ubuntu-latest
+    permissions:
+      id-token: write # for PyPI trusted publishing
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+        with:
+          token: ${{ secrets.PERSONAL_ACCESS_TOKEN || secrets.GITHUB_TOKEN }} # use your PAT here
+      - name: Git config
+        run: |
+          git config --global user.name "UltralyticsAssistant"
+          git config --global user.email "[email protected]"
+      - name: Set up Python environment
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.x"
+          cache: "pip" # caching pip dependencies
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip wheel
+          pip install requests build twine toml
+      - name: Check PyPI version
+        shell: python
+        run: |
+          import os
+          import requests
+          import toml
+
+          # Load version and package name from pyproject.toml
+          pyproject = toml.load('pyproject.toml')
+          package_name = pyproject['project']['name']
+          local_version = pyproject['project'].get('version', 'dynamic')
+
+          # If version is dynamic, extract it from the specified file
+          if local_version == 'dynamic':
+              version_attr = pyproject['tool']['setuptools']['dynamic']['version']['attr']
+              module_path, attr_name = version_attr.rsplit('.', 1)
+              with open(f"{module_path.replace('.', '/')}/__init__.py") as f:
+                  local_version = next(line.split('=')[1].strip().strip("'\"") for line in f if line.startswith(attr_name))
+
+          print(f"Local Version: {local_version}")
+
+          # Get online version from PyPI
+          response = requests.get(f"https://pypi.org/pypi/{package_name}/json")
+          online_version = response.json()['info']['version'] if response.status_code == 200 else None
+          print(f"Online Version: {online_version or 'Not Found'}")
+
+          # Determine if a new version should be published
+          publish = False
+          if online_version:
+              local_ver = tuple(map(int, local_version.split('.')))
+              online_ver = tuple(map(int, online_version.split('.')))
+              major_diff = local_ver[0] - online_ver[0]
+              minor_diff = local_ver[1] - online_ver[1]
+              patch_diff = local_ver[2] - online_ver[2]
+
+              publish = (
+                  (major_diff == 0 and minor_diff == 0 and 0 < patch_diff <= 2) or
+                  (major_diff == 0 and minor_diff == 1 and local_ver[2] == 0) or
+                  (major_diff == 1 and local_ver[1] == 0 and local_ver[2] == 0)
+              )
+          else:
+              publish = True  # First release
+
+          os.system(f'echo "increment={publish}" >> $GITHUB_OUTPUT')
+          os.system(f'echo "current_tag=v{local_version}" >> $GITHUB_OUTPUT')
+          os.system(f'echo "previous_tag=v{online_version}" >> $GITHUB_OUTPUT')
+
+          if publish:
+              print('Ready to publish new version to PyPI ✅.')
+        id: check_pypi
+      - name: Build package
+        if: (github.event_name == 'push' || github.event.inputs.pypi == 'true') && steps.check_pypi.outputs.increment == 'True'
+        run: python -m build
+      - name: Publish to PyPI
+        continue-on-error: true
+        if: (github.event_name == 'push' || github.event.inputs.pypi == 'true') && steps.check_pypi.outputs.increment == 'True'
+        uses: pypa/gh-action-pypi-publish@release/v1
+      - name: Publish new tag
+        if: (github.event_name == 'push' || github.event.inputs.pypi == 'true') && steps.check_pypi.outputs.increment == 'True'
+        run: |
+          git tag -a "${{ steps.check_pypi.outputs.current_tag }}" -m "$(git log -1 --pretty=%B)" # i.e. "v0.1.2 commit message"
+          git push origin "${{ steps.check_pypi.outputs.current_tag }}"
+      - name: Publish new release
+        if: (github.event_name == 'push' || github.event.inputs.pypi == 'true') && steps.check_pypi.outputs.increment == 'True'
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          GITHUB_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN || secrets.GITHUB_TOKEN }}
+          CURRENT_TAG: ${{ steps.check_pypi.outputs.current_tag }}
+          PREVIOUS_TAG: ${{ steps.check_pypi.outputs.previous_tag }}
+        run: |
+          curl -s "https://raw.githubusercontent.com/ultralytics/actions/main/utils/summarize_release.py" | python -
+        shell: bash
+      - name: Extract PR Details
+        env:
+          GH_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN || secrets.GITHUB_TOKEN }}
+        run: |
+          # Check if the event is a pull request or pull_request_target
+          if [ "${{ github.event_name }}" = "pull_request" ] || [ "${{ github.event_name }}" = "pull_request_target" ]; then
+            PR_NUMBER=${{ github.event.pull_request.number }}
+            PR_TITLE=$(gh pr view $PR_NUMBER --json title --jq '.title')
+          else
+            # Use gh to find the PR associated with the commit
+            COMMIT_SHA=${{ github.event.after }}
+            PR_JSON=$(gh pr list --search "${COMMIT_SHA}" --state merged --json number,title --jq '.[0]')
+            PR_NUMBER=$(echo $PR_JSON | jq -r '.number')
+            PR_TITLE=$(echo $PR_JSON | jq -r '.title')
+          fi
+          echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
+          echo "PR_TITLE=$PR_TITLE" >> $GITHUB_ENV
+      - name: Notify on Slack (Success)
+        if: success() && github.event_name == 'push' && steps.check_pypi.outputs.increment == 'True'
+        uses: slackapi/[email protected]
+        with:
+          payload: |
+            {"text": "<!channel> GitHub Actions success for ${{ github.workflow }} ✅\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* NEW '${{ github.repository }} ${{ steps.check_pypi.outputs.current_tag }}' pip package published 😃\n*Job Status:* ${{ job.status }}\n*Pull Request:* <https://github.com/${{ github.repository }}/pull/${{ env.PR_NUMBER }}> ${{ env.PR_TITLE }}\n"}
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
+      - name: Notify on Slack (Failure)
+        if: failure()
+        uses: slackapi/[email protected]
+        with:
+          payload: |
+            {"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n*Job Status:* ${{ job.status }}\n*Pull Request:* <https://github.com/${{ github.repository }}/pull/${{ env.PR_NUMBER }}> ${{ env.PR_TITLE }}\n"}
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
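
The version gate in the "Check PyPI version" step only publishes when the local version is a small, orderly step ahead of PyPI: at most two patch bumps, a minor bump that resets patch to 0, or a major bump that resets minor and patch. Here is a standalone re-statement of that rule for clarity; the function name and doctest values are illustrative, not part of the workflow:

```python
def should_publish(local: str, online: str | None) -> bool:
    """Re-implementation of the publish gate in the workflow above.

    >>> should_publish("8.0.2", "8.0.0")  # within two patch bumps
    True
    >>> should_publish("8.0.3", "8.0.0")  # too many patch bumps at once
    False
    >>> should_publish("8.1.0", "8.0.5")  # minor bump resets patch to 0
    True
    >>> should_publish("9.0.0", "8.9.9")  # major bump resets minor and patch
    True
    >>> should_publish("8.0.0", None)  # first release
    True
    """
    if online is None:
        return True  # nothing on PyPI yet
    lv, ov = (tuple(map(int, v.split("."))) for v in (local, online))
    major, minor, patch = (lv[i] - ov[i] for i in range(3))
    return (
        (major == 0 and minor == 0 and 0 < patch <= 2)
        or (major == 0 and minor == 1 and lv[2] == 0)
        or (major == 1 and lv[1] == 0 and lv[2] == 0)
    )
```

The two-patch ceiling presumably guards against accidentally skipping releases; anything larger requires a deliberate minor or major bump.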
.github/workflows/stale.yml
ADDED
@@ -0,0 +1,47 @@
+# Ultralytics YOLO 🚀, AGPL-3.0 license
+
+name: Close stale issues
+on:
+  schedule:
+    - cron: "0 0 * * *" # Runs at 00:00 UTC every day
+
+jobs:
+  stale:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/stale@v9
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+
+          stale-issue-message: |
+            👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
+
+            For additional resources and information, please see the links below:
+
+            - **Docs**: https://docs.ultralytics.com
+            - **HUB**: https://hub.ultralytics.com
+            - **Community**: https://community.ultralytics.com
+
+            Feel free to inform us of any other **issues** you discover or **feature requests** that come to mind in the future. Pull Requests (PRs) are also always welcomed!
+
+            Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
+
+          stale-pr-message: |
+            👋 Hello there! We wanted to let you know that we've decided to close this pull request due to inactivity. We appreciate the effort you put into contributing to our project, but unfortunately, not all contributions are suitable or aligned with our product roadmap.
+
+            We hope you understand our decision, and please don't let it discourage you from contributing to open source projects in the future. We value all of our community members and their contributions, and we encourage you to keep exploring new projects and ways to get involved.
+
+            For additional resources and information, please see the links below:
+
+            - **Docs**: https://docs.ultralytics.com
+            - **HUB**: https://hub.ultralytics.com
+            - **Community**: https://community.ultralytics.com
+
+            Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
+
+          days-before-issue-stale: 30
+          days-before-issue-close: 10
+          days-before-pr-stale: 90
+          days-before-pr-close: 30
+          exempt-issue-labels: "documentation,tutorial,TODO"
+          operations-per-run: 300 # The maximum number of operations per run, used to control rate limiting.
.gitignore
ADDED
@@ -0,0 +1,171 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+requirements.txt
+setup.py
+ultralytics.egg-info
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other info into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+mlruns/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# Profiling
+*.pclprof
+
+# pyenv
+.python-version
+
+# pipenv
+#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+#   However, in case of collaboration, if having platform-specific dependencies or dependencies
+#   having no cross-platform support, pipenv may install dependencies that don't work, or not
+#   install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+.idea
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# VSCode project settings
+.vscode/
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# datasets and projects (ignore /datasets dir at root only to allow for docs/en/datasets dir)
+/datasets
+runs/
+wandb/
+.DS_Store
+
+# Neural Network weights -----------------------------------------------------------------------------------------------
+weights/
+*.weights
+*.pt
+*.pb
+*.onnx
+*.engine
+*.mlmodel
+*.mlpackage
+*.torchscript
+*.tflite
+*.h5
+*_saved_model/
+*_web_model/
+*_openvino_model/
+*_paddle_model/
+*_ncnn_model/
+pnnx*
+
+# Autogenerated files for tests
+/ultralytics/assets/
+
+# calibration image
+calibration_*.npy
CITATION.cff
ADDED
@@ -0,0 +1,26 @@
+# This CITATION.cff file was generated with https://bit.ly/cffinit
+
+cff-version: 1.2.0
+title: Ultralytics YOLO
+message: >-
+  If you use this software, please cite it using the
+  metadata from this file.
+type: software
+authors:
+  - given-names: Glenn
+    family-names: Jocher
+    affiliation: Ultralytics
+    orcid: 'https://orcid.org/0000-0001-5950-6979'
+  - family-names: Qiu
+    given-names: Jing
+    affiliation: Ultralytics
+    orcid: 'https://orcid.org/0000-0003-3783-7069'
+  - given-names: Ayush
+    family-names: Chaurasia
+    affiliation: Ultralytics
+    orcid: 'https://orcid.org/0000-0002-7603-6750'
+repository-code: 'https://github.com/ultralytics/ultralytics'
+url: 'https://ultralytics.com'
+license: AGPL-3.0
+version: 8.0.0
+date-released: '2023-01-10'
CONTRIBUTING.md
ADDED
@@ -0,0 +1,166 @@
+---
+comments: true
+description: Learn how to contribute to Ultralytics YOLO open-source repositories. Follow guidelines for pull requests, code of conduct, and bug reporting.
+keywords: Ultralytics, YOLO, open-source, contribution, pull request, code of conduct, bug reporting, GitHub, CLA, Google-style docstrings
+---
+
+# Contributing to Ultralytics Open-Source Projects
+
+Welcome! We're thrilled that you're considering contributing to our [Ultralytics](https://www.ultralytics.com/) [open-source](https://github.com/ultralytics) projects. Your involvement not only helps enhance the quality of our repositories but also benefits the entire community. This guide provides clear guidelines and best practices to help you get started.
+
+<a href="https://github.com/ultralytics/ultralytics/graphs/contributors">
+<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" alt="Ultralytics open-source contributors"></a>
+
+## Table of Contents
+
+1. [Code of Conduct](#code-of-conduct)
+2. [Contributing via Pull Requests](#contributing-via-pull-requests)
+    - [CLA Signing](#cla-signing)
+    - [Google-Style Docstrings](#google-style-docstrings)
+    - [GitHub Actions CI Tests](#github-actions-ci-tests)
+3. [Reporting Bugs](#reporting-bugs)
+4. [License](#license)
+5. [Conclusion](#conclusion)
+6. [FAQ](#faq)
+
+## Code of Conduct
+
+To ensure a welcoming and inclusive environment for everyone, all contributors must adhere to our [Code of Conduct](https://docs.ultralytics.com/help/code_of_conduct/). Respect, kindness, and professionalism are at the heart of our community.
+
+## Contributing via Pull Requests
+
+We greatly appreciate contributions in the form of pull requests. To make the review process as smooth as possible, please follow these steps:
+
+1. **[Fork the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo):** Start by forking the Ultralytics YOLO repository to your GitHub account.
+
+2. **[Create a branch](https://docs.github.com/en/desktop/making-changes-in-a-branch/managing-branches-in-github-desktop):** Create a new branch in your forked repository with a clear, descriptive name that reflects your changes.
+
+3. **Make your changes:** Ensure your code adheres to the project's style guidelines and does not introduce any new errors or warnings.
+
+4. **[Test your changes](https://github.com/ultralytics/ultralytics/tree/main/tests):** Before submitting, test your changes locally to confirm they work as expected and don't cause any new issues.
+
+5. **[Commit your changes](https://docs.github.com/en/desktop/making-changes-in-a-branch/committing-and-reviewing-changes-to-your-project-in-github-desktop):** Commit your changes with a concise and descriptive commit message. If your changes address a specific issue, include the issue number in your commit message.
+
+6. **[Create a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request):** Submit a pull request from your forked repository to the main Ultralytics YOLO repository. Provide a clear and detailed explanation of your changes and how they improve the project.
+
+### CLA Signing
+
+Before we can merge your pull request, you must sign our [Contributor License Agreement (CLA)](https://docs.ultralytics.com/help/CLA/). This legal agreement ensures that your contributions are properly licensed, allowing the project to continue being distributed under the AGPL-3.0 license.
+
+After submitting your pull request, the CLA bot will guide you through the signing process. To sign the CLA, simply add a comment in your PR stating:
+
+```
+I have read the CLA Document and I sign the CLA
+```
+
+### Google-Style Docstrings
+
+When adding new functions or classes, please include [Google-style docstrings](https://google.github.io/styleguide/pyguide.html). These docstrings provide clear, standardized documentation that helps other developers understand and maintain your code.
+
+#### Example
+
+This example illustrates a Google-style docstring. Ensure that both input and output `types` are always enclosed in parentheses, e.g., `(bool)`.
+
+```python
+def example_function(arg1, arg2=4):
+    """
+    Example function demonstrating Google-style docstrings.
+
+    Args:
+        arg1 (int): The first argument.
+        arg2 (int): The second argument, with a default value of 4.
+
+    Returns:
+        (bool): True if successful, False otherwise.
+
+    Examples:
+        >>> result = example_function(1, 2)  # returns False
+    """
+    if arg1 == arg2:
+        return True
+    return False
+```
+
+#### Example with type hints
+
+This example includes both a Google-style docstring and type hints for arguments and returns, though using either independently is also acceptable.
+
+```python
+def example_function(arg1: int, arg2: int = 4) -> bool:
+    """
+    Example function demonstrating Google-style docstrings.
+
+    Args:
+        arg1: The first argument.
+        arg2: The second argument, with a default value of 4.
+
+    Returns:
+        True if successful, False otherwise.
+
+    Examples:
+        >>> result = example_function(1, 2)  # returns False
+    """
+    if arg1 == arg2:
+        return True
+    return False
+```
+
+#### Example Single-line
+
+For smaller or simpler functions, a single-line docstring may be sufficient. The docstring must use three double-quotes, be a complete sentence, start with a capital letter, and end with a period.
+
+```python
+def example_small_function(arg1: int, arg2: int = 4) -> bool:
+    """Example function with a single-line docstring."""
+    return arg1 == arg2
+```
+
+### GitHub Actions CI Tests
+
+All pull requests must pass the GitHub Actions [Continuous Integration](https://docs.ultralytics.com/help/CI/) (CI) tests before they can be merged. These tests include linting, unit tests, and other checks to ensure that your changes meet the project's quality standards. Review the CI output and address any issues that arise.
+
+## Reporting Bugs
+
+We highly value bug reports as they help us maintain the quality of our projects. When reporting a bug, please provide a [Minimum Reproducible Example](https://docs.ultralytics.com/help/minimum_reproducible_example/)—a simple, clear code example that consistently reproduces the issue. This allows us to quickly identify and resolve the problem.
+
+## License
+
+Ultralytics uses the [GNU Affero General Public License v3.0 (AGPL-3.0)](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) for its repositories. This license promotes openness, transparency, and collaborative improvement in software development. It ensures that all users have the freedom to use, modify, and share the software, fostering a strong community of collaboration and innovation.
+
+We encourage all contributors to familiarize themselves with the terms of the AGPL-3.0 license to contribute effectively and ethically to the Ultralytics open-source community.
+
+## Conclusion
+
+Thank you for your interest in contributing to [Ultralytics](https://www.ultralytics.com/) [open-source](https://github.com/ultralytics) YOLO projects. Your participation is essential in shaping the future of our software and building a vibrant community of innovation and collaboration. Whether you're enhancing code, reporting bugs, or suggesting new features, your contributions are invaluable.
+
+We're excited to see your ideas come to life and appreciate your commitment to advancing object detection technology. Together, let's continue to grow and innovate in this exciting open-source journey. Happy coding! 🚀🌟
+
+## FAQ
+
+### Why should I contribute to Ultralytics YOLO open-source repositories?
+
+Contributing to Ultralytics YOLO open-source repositories improves the software, making it more robust and feature-rich for the entire community. Contributions can include code enhancements, bug fixes, documentation improvements, and new feature implementations. Additionally, contributing allows you to collaborate with other skilled developers and experts in the field, enhancing your own skills and reputation. For details on how to get started, refer to the [Contributing via Pull Requests](#contributing-via-pull-requests) section.
+
+### How do I sign the Contributor License Agreement (CLA) for Ultralytics YOLO?
+
+To sign the Contributor License Agreement (CLA), follow the instructions provided by the CLA bot after submitting your pull request. This process ensures that your contributions are properly licensed under the AGPL-3.0 license, maintaining the legal integrity of the open-source project. Add a comment in your pull request stating:
+
+```
+I have read the CLA Document and I sign the CLA.
+```
+
+For more information, see the [CLA Signing](#cla-signing) section.
+
+### What are Google-style docstrings, and why are they required for Ultralytics YOLO contributions?
+
+Google-style docstrings provide clear, concise documentation for functions and classes, improving code readability and maintainability. These docstrings outline the function's purpose, arguments, and return values with specific formatting rules. When contributing to Ultralytics YOLO, following Google-style docstrings ensures that your additions are well-documented and easily understood. For examples and guidelines, visit the [Google-Style Docstrings](#google-style-docstrings) section.
+
+### How can I ensure my changes pass the GitHub Actions CI tests?
+
+Before your pull request can be merged, it must pass all GitHub Actions Continuous Integration (CI) tests. These tests include linting, unit tests, and other checks to ensure the code meets the project's quality standards. Review the CI output and fix any issues. For detailed information on the CI process and troubleshooting tips, see the [GitHub Actions CI Tests](#github-actions-ci-tests) section.
+
+### How do I report a bug in Ultralytics YOLO repositories?
+
+To report a bug, provide a clear and concise [Minimum Reproducible Example](https://docs.ultralytics.com/help/minimum_reproducible_example/) along with your bug report. This helps developers quickly identify and fix the issue. Ensure your example is minimal yet sufficient to replicate the problem. For more detailed steps on reporting bugs, refer to the [Reporting Bugs](#reporting-bugs) section.
LICENSE
ADDED
@@ -0,0 +1,661 @@
+                    GNU AFFERO GENERAL PUBLIC LICENSE
+                       Version 3, 19 November 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.  By contrast,
+our General Public Licenses are intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  Developers that use our General Public Licenses protect your rights
+with two steps: (1) assert copyright on the software, and (2) offer
+you this License which gives you legal permission to copy, distribute
+and/or modify the software.
+
+  A secondary benefit of defending all users' freedom is that
+improvements made in alternate versions of the program, if they
+receive widespread use, become available for other developers to
+incorporate.  Many developers of free software are heartened and
+encouraged by the resulting cooperation.  However, in the case of
+software used on network servers, this result may fail to come about.
+The GNU General Public License permits making a modified version and
+letting the public access it on a server without ever releasing its
+source code to the public.
+
+  The GNU Affero General Public License is designed specifically to
+ensure that, in such cases, the modified source code becomes available
+to the community.  It requires the operator of a network server to
+provide the source code of the modified version running there to the
+users of that server.  Therefore, public use of a modified version, on
+a publicly accessible server, gives the public access to the source
+code of the modified version.
+
+  An older license, called the Affero General Public License and
+published by Affero, was designed to accomplish similar goals.  This is
+a different license, not a version of the Affero GPL, but Affero has
+released a new version of the Affero GPL which permits relicensing under
+this license.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+  0. Definitions.
+
+  "This License" refers to version 3 of the GNU Affero General Public License.
+
+  "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+  "The Program" refers to any copyrightable work licensed under this
+License.  Each licensee is addressed as "you".  "Licensees" and
+"recipients" may be individuals or organizations.
+
+  To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy.  The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+  A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+  To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy.  Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+  To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies.  Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+  An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License.  If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+  1. Source Code.
+
+  The "source code" for a work means the preferred form of the work
+for making modifications to it.  "Object code" means any non-source
+form of a work.
+
+  A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+  The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form.  A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+  The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities.  However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work.  For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+  The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+  The Corresponding Source for a work in source code form is that
+same work.
+
+  2. Basic Permissions.
+
+  All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met.  This License explicitly affirms your unlimited
+permission to run the unmodified Program.  The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work.  This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+  You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force.  You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright.  Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+  Conveying under any other circumstances is permitted solely under
+the conditions stated below.  Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+  3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+  No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+  When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+  4. Conveying Verbatim Copies.
+
+  You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+  You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+  5. Conveying Modified Source Versions.
+
+  You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section
+    7.  This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy.  This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged.  This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+  A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit.  Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+  6. Conveying Non-Source Forms.
+
+  You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source.  This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge.  You need not require recipients to copy the
+    Corresponding Source along with the object code.  If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source.  Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+  A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+  A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling.  In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage.  For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product.  A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+  "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source.  The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+  If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information.  But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+  The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed.  Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+  Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+  7. Additional Terms.
+
+  "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law.  If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+  When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it.  (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.)  You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+  Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+    a) Disclaiming warranty or limiting liability differently from the
+    terms of sections 15 and 16 of this License; or
+
+    b) Requiring preservation of specified reasonable legal notices or
+    author attributions in that material or in the Appropriate Legal
+    Notices displayed by works containing it; or
+
+    c) Prohibiting misrepresentation of the origin of that material, or
+    requiring that modified versions of such material be marked in
+    reasonable ways as different from the original version; or
+
+    d) Limiting the use for publicity purposes of names of licensors or
+    authors of the material; or
+
+    e) Declining to grant rights under trademark law for use of some
+    trade names, trademarks, or service marks; or
+
+    f) Requiring indemnification of licensors and authors of that
+    material by anyone who conveys the material (or modified versions of
+    it) with contractual assumptions of liability to the recipient, for
+    any liability that these contractual assumptions directly impose on
+    those licensors and authors.
+
+  All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10.  If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term.  If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+  If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+  Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+  8. Termination.
+
+  You may not propagate or modify a covered work except as expressly
+provided under this License.  Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+  However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+  Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
|
416 |
+
|
417 |
+
Termination of your rights under this section does not terminate the
|
418 |
+
licenses of parties who have received copies or rights from you under
|
419 |
+
this License. If your rights have been terminated and not permanently
|
420 |
+
reinstated, you do not qualify to receive new licenses for the same
|
421 |
+
material under section 10.
|
422 |
+
|
423 |
+
9. Acceptance Not Required for Having Copies.
|
424 |
+
|
425 |
+
You are not required to accept this License in order to receive or
|
426 |
+
run a copy of the Program. Ancillary propagation of a covered work
|
427 |
+
occurring solely as a consequence of using peer-to-peer transmission
|
428 |
+
to receive a copy likewise does not require acceptance. However,
|
429 |
+
nothing other than this License grants you permission to propagate or
|
430 |
+
modify any covered work. These actions infringe copyright if you do
|
431 |
+
not accept this License. Therefore, by modifying or propagating a
|
432 |
+
covered work, you indicate your acceptance of this License to do so.
|
433 |
+
|
434 |
+
10. Automatic Licensing of Downstream Recipients.
|
435 |
+
|
436 |
+
Each time you convey a covered work, the recipient automatically
|
437 |
+
receives a license from the original licensors, to run, modify and
|
438 |
+
propagate that work, subject to this License. You are not responsible
|
439 |
+
for enforcing compliance by third parties with this License.
|
440 |
+
|
441 |
+
An "entity transaction" is a transaction transferring control of an
|
442 |
+
organization, or substantially all assets of one, or subdividing an
|
443 |
+
organization, or merging organizations. If propagation of a covered
|
444 |
+
work results from an entity transaction, each party to that
|
445 |
+
transaction who receives a copy of the work also receives whatever
|
446 |
+
licenses to the work the party's predecessor in interest had or could
|
447 |
+
give under the previous paragraph, plus a right to possession of the
|
448 |
+
Corresponding Source of the work from the predecessor in interest, if
|
449 |
+
the predecessor has it or can get it with reasonable efforts.
|
450 |
+
|
451 |
+
You may not impose any further restrictions on the exercise of the
|
452 |
+
rights granted or affirmed under this License. For example, you may
|
453 |
+
not impose a license fee, royalty, or other charge for exercise of
|
454 |
+
rights granted under this License, and you may not initiate litigation
|
455 |
+
(including a cross-claim or counterclaim in a lawsuit) alleging that
|
456 |
+
any patent claim is infringed by making, using, selling, offering for
|
457 |
+
sale, or importing the Program or any portion of it.
|
458 |
+
|
459 |
+
11. Patents.
|
460 |
+
|
461 |
+
A "contributor" is a copyright holder who authorizes use under this
|
462 |
+
License of the Program or a work on which the Program is based. The
|
463 |
+
work thus licensed is called the contributor's "contributor version".
|
464 |
+
|
465 |
+
A contributor's "essential patent claims" are all patent claims
|
466 |
+
owned or controlled by the contributor, whether already acquired or
|
467 |
+
hereafter acquired, that would be infringed by some manner, permitted
|
468 |
+
by this License, of making, using, or selling its contributor version,
|
469 |
+
but do not include claims that would be infringed only as a
|
470 |
+
consequence of further modification of the contributor version. For
|
471 |
+
purposes of this definition, "control" includes the right to grant
|
472 |
+
patent sublicenses in a manner consistent with the requirements of
|
473 |
+
this License.
|
474 |
+
|
475 |
+
Each contributor grants you a non-exclusive, worldwide, royalty-free
|
476 |
+
patent license under the contributor's essential patent claims, to
|
477 |
+
make, use, sell, offer for sale, import and otherwise run, modify and
|
478 |
+
propagate the contents of its contributor version.
|
479 |
+
|
480 |
+
In the following three paragraphs, a "patent license" is any express
|
481 |
+
agreement or commitment, however denominated, not to enforce a patent
|
482 |
+
(such as an express permission to practice a patent or covenant not to
|
483 |
+
sue for patent infringement). To "grant" such a patent license to a
|
484 |
+
party means to make such an agreement or commitment not to enforce a
|
485 |
+
patent against the party.
|
486 |
+
|
487 |
+
If you convey a covered work, knowingly relying on a patent license,
|
488 |
+
and the Corresponding Source of the work is not available for anyone
|
489 |
+
to copy, free of charge and under the terms of this License, through a
|
490 |
+
publicly available network server or other readily accessible means,
|
491 |
+
then you must either (1) cause the Corresponding Source to be so
|
492 |
+
available, or (2) arrange to deprive yourself of the benefit of the
|
493 |
+
patent license for this particular work, or (3) arrange, in a manner
|
494 |
+
consistent with the requirements of this License, to extend the patent
|
495 |
+
license to downstream recipients. "Knowingly relying" means you have
|
496 |
+
actual knowledge that, but for the patent license, your conveying the
|
497 |
+
covered work in a country, or your recipient's use of the covered work
|
498 |
+
in a country, would infringe one or more identifiable patents in that
|
499 |
+
country that you have reason to believe are valid.
|
500 |
+
|
501 |
+
If, pursuant to or in connection with a single transaction or
|
502 |
+
arrangement, you convey, or propagate by procuring conveyance of, a
|
503 |
+
covered work, and grant a patent license to some of the parties
|
504 |
+
receiving the covered work authorizing them to use, propagate, modify
|
505 |
+
or convey a specific copy of the covered work, then the patent license
|
506 |
+
you grant is automatically extended to all recipients of the covered
|
507 |
+
work and works based on it.
|
508 |
+
|
509 |
+
A patent license is "discriminatory" if it does not include within
|
510 |
+
the scope of its coverage, prohibits the exercise of, or is
|
511 |
+
conditioned on the non-exercise of one or more of the rights that are
|
512 |
+
specifically granted under this License. You may not convey a covered
|
513 |
+
work if you are a party to an arrangement with a third party that is
|
514 |
+
in the business of distributing software, under which you make payment
|
515 |
+
to the third party based on the extent of your activity of conveying
|
516 |
+
the work, and under which the third party grants, to any of the
|
517 |
+
parties who would receive the covered work from you, a discriminatory
|
518 |
+
patent license (a) in connection with copies of the covered work
|
519 |
+
conveyed by you (or copies made from those copies), or (b) primarily
|
520 |
+
for and in connection with specific products or compilations that
|
521 |
+
contain the covered work, unless you entered into that arrangement,
|
522 |
+
or that patent license was granted, prior to 28 March 2007.
|
523 |
+
|
524 |
+
Nothing in this License shall be construed as excluding or limiting
|
525 |
+
any implied license or other defenses to infringement that may
|
526 |
+
otherwise be available to you under applicable patent law.
|
527 |
+
|
528 |
+
12. No Surrender of Others' Freedom.
|
529 |
+
|
530 |
+
If conditions are imposed on you (whether by court order, agreement or
|
531 |
+
otherwise) that contradict the conditions of this License, they do not
|
532 |
+
excuse you from the conditions of this License. If you cannot convey a
|
533 |
+
covered work so as to satisfy simultaneously your obligations under this
|
534 |
+
License and any other pertinent obligations, then as a consequence you may
|
535 |
+
not convey it at all. For example, if you agree to terms that obligate you
|
536 |
+
to collect a royalty for further conveying from those to whom you convey
|
537 |
+
the Program, the only way you could satisfy both those terms and this
|
538 |
+
License would be to refrain entirely from conveying the Program.
|
539 |
+
|
540 |
+
13. Remote Network Interaction; Use with the GNU General Public License.
|
541 |
+
|
542 |
+
Notwithstanding any other provision of this License, if you modify the
|
543 |
+
Program, your modified version must prominently offer all users
|
544 |
+
interacting with it remotely through a computer network (if your version
|
545 |
+
supports such interaction) an opportunity to receive the Corresponding
|
546 |
+
Source of your version by providing access to the Corresponding Source
|
547 |
+
from a network server at no charge, through some standard or customary
|
548 |
+
means of facilitating copying of software. This Corresponding Source
|
549 |
+
shall include the Corresponding Source for any work covered by version 3
|
550 |
+
of the GNU General Public License that is incorporated pursuant to the
|
551 |
+
following paragraph.
|
552 |
+
|
553 |
+
Notwithstanding any other provision of this License, you have
|
554 |
+
permission to link or combine any covered work with a work licensed
|
555 |
+
under version 3 of the GNU General Public License into a single
|
556 |
+
combined work, and to convey the resulting work. The terms of this
|
557 |
+
License will continue to apply to the part which is the covered work,
|
558 |
+
but the work with which it is combined will remain governed by version
|
559 |
+
3 of the GNU General Public License.
|
560 |
+
|
561 |
+
14. Revised Versions of this License.
|
562 |
+
|
563 |
+
The Free Software Foundation may publish revised and/or new versions of
|
564 |
+
the GNU Affero General Public License from time to time. Such new versions
|
565 |
+
will be similar in spirit to the present version, but may differ in detail to
|
566 |
+
address new problems or concerns.
|
567 |
+
|
568 |
+
Each version is given a distinguishing version number. If the
|
569 |
+
Program specifies that a certain numbered version of the GNU Affero General
|
570 |
+
Public License "or any later version" applies to it, you have the
|
571 |
+
option of following the terms and conditions either of that numbered
|
572 |
+
version or of any later version published by the Free Software
|
573 |
+
Foundation. If the Program does not specify a version number of the
|
574 |
+
GNU Affero General Public License, you may choose any version ever published
|
575 |
+
by the Free Software Foundation.
|
576 |
+
|
577 |
+
If the Program specifies that a proxy can decide which future
|
578 |
+
versions of the GNU Affero General Public License can be used, that proxy's
|
579 |
+
public statement of acceptance of a version permanently authorizes you
|
580 |
+
to choose that version for the Program.
|
581 |
+
|
582 |
+
Later license versions may give you additional or different
|
583 |
+
permissions. However, no additional obligations are imposed on any
|
584 |
+
author or copyright holder as a result of your choosing to follow a
|
585 |
+
later version.
|
586 |
+
|
587 |
+
15. Disclaimer of Warranty.
|
588 |
+
|
589 |
+
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
590 |
+
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
591 |
+
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
592 |
+
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
593 |
+
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
594 |
+
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
595 |
+
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
596 |
+
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
597 |
+
|
598 |
+
16. Limitation of Liability.
|
599 |
+
|
600 |
+
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
601 |
+
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
602 |
+
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
603 |
+
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
604 |
+
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
605 |
+
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
606 |
+
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
607 |
+
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
608 |
+
SUCH DAMAGES.
|
609 |
+
|
610 |
+
17. Interpretation of Sections 15 and 16.
|
611 |
+
|
612 |
+
If the disclaimer of warranty and limitation of liability provided
|
613 |
+
above cannot be given local legal effect according to their terms,
|
614 |
+
reviewing courts shall apply local law that most closely approximates
|
615 |
+
an absolute waiver of all civil liability in connection with the
|
616 |
+
Program, unless a warranty or assumption of liability accompanies a
|
617 |
+
copy of the Program in return for a fee.
|
618 |
+
|
619 |
+
END OF TERMS AND CONDITIONS
|
620 |
+
|
621 |
+
How to Apply These Terms to Your New Programs
|
622 |
+
|
623 |
+
If you develop a new program, and you want it to be of the greatest
|
624 |
+
possible use to the public, the best way to achieve this is to make it
|
625 |
+
free software which everyone can redistribute and change under these terms.
|
626 |
+
|
627 |
+
To do so, attach the following notices to the program. It is safest
|
628 |
+
to attach them to the start of each source file to most effectively
|
629 |
+
state the exclusion of warranty; and each file should have at least
|
630 |
+
the "copyright" line and a pointer to where the full notice is found.
|
631 |
+
|
632 |
+
<one line to give the program's name and a brief idea of what it does.>
|
633 |
+
Copyright (C) <year> <name of author>
|
634 |
+
|
635 |
+
This program is free software: you can redistribute it and/or modify
|
636 |
+
it under the terms of the GNU Affero General Public License as published by
|
637 |
+
the Free Software Foundation, either version 3 of the License, or
|
638 |
+
(at your option) any later version.
|
639 |
+
|
640 |
+
This program is distributed in the hope that it will be useful,
|
641 |
+
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
642 |
+
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
643 |
+
GNU Affero General Public License for more details.
|
644 |
+
|
645 |
+
You should have received a copy of the GNU Affero General Public License
|
646 |
+
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
647 |
+
|
648 |
+
Also add information on how to contact you by electronic and paper mail.
|
649 |
+
|
650 |
+
If your software can interact with users remotely through a computer
|
651 |
+
network, you should also make sure that it provides a way for users to
|
652 |
+
get its source. For example, if your program is a web application, its
|
653 |
+
interface could display a "Source" link that leads users to an archive
|
654 |
+
of the code. There are many ways you could offer source, and different
|
655 |
+
solutions will be better for different programs; see section 13 for the
|
656 |
+
specific requirements.
|
657 |
+
|
658 |
+
You should also get your employer (if you work as a programmer) or school,
|
659 |
+
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
660 |
+
For more information on this, and how to apply and follow the GNU AGPL, see
|
661 |
+
<https://www.gnu.org/licenses/>.
|
README.md
CHANGED
@@ -1,3 +1,278 @@
<div align="center">
  <p>
    <a href="https://www.ultralytics.com/events/yolovision" target="_blank">
      <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png" alt="YOLO Vision banner"></a>
  </p>

[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar) <br>

<div>
  <a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
  <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
  <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Ultralytics Docker Pulls"></a>
  <a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
  <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
  <a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>
  <br>
  <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run Ultralytics on Gradient"></a>
  <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open Ultralytics In Colab"></a>
  <a href="https://www.kaggle.com/ultralytics/yolov8"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open Ultralytics In Kaggle"></a>
</div>
<br>

[Ultralytics](https://www.ultralytics.com/) [YOLO11](https://github.com/ultralytics/ultralytics) is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLO11 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks.

We hope that the resources here will help you get the most out of YOLO. Please browse the Ultralytics <a href="https://docs.ultralytics.com/">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/ultralytics/issues/new/choose">GitHub</a> for support, questions, or discussions, and become a member of the Ultralytics <a href="https://discord.com/invite/ultralytics">Discord</a>, <a href="https://reddit.com/r/ultralytics">Reddit</a>, and <a href="https://community.ultralytics.com/">Forums</a>!

To request an Enterprise License, please complete the form at [Ultralytics Licensing](https://www.ultralytics.com/license).

<img width="100%" src="https://github.com/user-attachments/assets/a311a4ed-bbf2-43b5-8012-5f183a28a845" alt="YOLO11 performance plots">

<div align="center">
  <a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
  <a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="Ultralytics LinkedIn"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
  <a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="Ultralytics Twitter"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
  <a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="Ultralytics YouTube"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
  <a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="Ultralytics TikTok"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
  <a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="2%" alt="Ultralytics BiliBili"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
  <a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="2%" alt="Ultralytics Discord"></a>
</div>
</div>

## <div align="center">Documentation</div>

See below for quickstart install and usage examples, and see our [Docs](https://docs.ultralytics.com/) for full documentation on training, validation, prediction, and deployment.

<details open>
<summary>Install</summary>

Pip install the ultralytics package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) in a [**Python>=3.8**](https://www.python.org/) environment with [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).

[](https://pypi.org/project/ultralytics/) [](https://pepy.tech/project/ultralytics) [](https://pypi.org/project/ultralytics/)

```bash
pip install ultralytics
```
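
After installing, you can verify the environment with the package's built-in checks helper, which prints the detected Ultralytics, Python, PyTorch, and hardware details (a minimal sketch; output varies by machine):

```python
import ultralytics

# Print a software/hardware summary (Ultralytics version, Python, torch, CUDA, RAM, disk)
ultralytics.checks()
```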

For alternative installation methods including [Conda](https://anaconda.org/conda-forge/ultralytics), [Docker](https://hub.docker.com/r/ultralytics/ultralytics), and Git, please refer to the [Quickstart Guide](https://docs.ultralytics.com/quickstart/).

[](https://anaconda.org/conda-forge/ultralytics) [](https://hub.docker.com/r/ultralytics/ultralytics)

</details>

<details open>
<summary>Usage</summary>

### CLI

YOLO may be used directly in the Command Line Interface (CLI) with a `yolo` command:

```bash
yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
```

`yolo` can be used for a variety of tasks and modes and accepts additional arguments, e.g. `imgsz=640`. See the YOLO [CLI Docs](https://docs.ultralytics.com/usage/cli/) for examples.
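
The same `yolo <task> <mode> <args>` pattern extends to training, validation, and export. A minimal sketch (the dataset and model names are illustrative):

```bash
# Train a detection model on the small COCO8 example dataset
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640

# Validate the trained weights
yolo detect val model=yolo11n.pt data=coco8.yaml

# Export the weights to ONNX
yolo export model=yolo11n.pt format=onnx
```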

### Python

YOLO may also be used directly in a Python environment, and accepts the same [arguments](https://docs.ultralytics.com/usage/cfg/) as in the CLI example above:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")

# Train the model
train_results = model.train(
    data="coco8.yaml",  # path to dataset YAML
    epochs=100,  # number of training epochs
    imgsz=640,  # training image size
    device="cpu",  # device to run on, e.g. device=0 or device=0,1,2,3 or device=cpu
)

# Evaluate model performance on the validation set
metrics = model.val()

# Perform object detection on an image
results = model("path/to/image.jpg")
results[0].show()

# Export the model to ONNX format
path = model.export(format="onnx")  # returns the path to the exported model
```
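
The exported file can typically be loaded back through the same `YOLO` class, so downstream inference code stays unchanged (a minimal sketch assuming the export above produced `yolo11n.onnx`):

```python
from ultralytics import YOLO

# Load the exported ONNX model and run inference with the same API
onnx_model = YOLO("yolo11n.onnx")
results = onnx_model("https://ultralytics.com/images/bus.jpg")
results[0].show()
```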

See YOLO [Python Docs](https://docs.ultralytics.com/usage/python/) for more examples.

</details>

## <div align="center">Models</div>

YOLO11 [Detect](https://docs.ultralytics.com/tasks/detect/), [Segment](https://docs.ultralytics.com/tasks/segment/) and [Pose](https://docs.ultralytics.com/tasks/pose/) models pretrained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset are available here, as well as YOLO11 [Classify](https://docs.ultralytics.com/tasks/classify/) models pretrained on the [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) dataset. [Track](https://docs.ultralytics.com/modes/track/) mode is available for all Detect, Segment and Pose models, as sketched below.
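
Track mode uses the same Python API as prediction (a minimal sketch; the video path is illustrative):

```python
from ultralytics import YOLO

# Run multi-object tracking on a video with a pretrained detection model
model = YOLO("yolo11n.pt")
results = model.track(source="path/to/video.mp4", show=True)
```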

<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png" alt="Ultralytics YOLO supported tasks">

All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.

<details open><summary>Detection (COCO)</summary>

See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/detect/coco/), which include 80 pre-trained classes.

| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt) | 640 | 39.5 | 56.1 ± 0.8 | 1.5 ± 0.0 | 2.6 | 6.5 |
| [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt) | 640 | 47.0 | 90.0 ± 1.2 | 2.5 ± 0.0 | 9.4 | 21.5 |
| [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt) | 640 | 51.5 | 183.2 ± 2.0 | 4.7 ± 0.1 | 20.1 | 68.0 |
| [YOLO11l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l.pt) | 640 | 53.4 | 238.6 ± 1.4 | 6.2 ± 0.1 | 25.3 | 86.9 |
| [YOLO11x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt) | 640 | 54.7 | 462.8 ± 6.7 | 11.3 ± 0.2 | 56.9 | 194.9 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val detect data=coco.yaml batch=1 device=0|cpu`

</details>

<details><summary>Segmentation (COCO)</summary>

See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples with these models trained on [COCO-Seg](https://docs.ultralytics.com/datasets/segment/coco/), which include 80 pre-trained classes.

| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 38.9 | 32.0 | 65.9 ± 1.1 | 1.8 ± 0.0 | 2.9 | 10.4 |
| [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 46.6 | 37.8 | 117.6 ± 4.9 | 2.9 ± 0.0 | 10.1 | 35.5 |
| [YOLO11m-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-seg.pt) | 640 | 51.5 | 41.5 | 281.6 ± 1.2 | 6.3 ± 0.1 | 22.4 | 123.3 |
| [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
| [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`

</details>

<details><summary>Classification (ImageNet)</summary>

See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples with these models trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/), which include 1000 pretrained classes.

| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-cls.pt) | 224 | 70.0 | 89.4 | 5.0 ± 0.3 | 1.1 ± 0.0 | 1.6 | 3.3 |
| [YOLO11s-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-cls.pt) | 224 | 75.4 | 92.7 | 7.9 ± 0.2 | 1.3 ± 0.0 | 5.5 | 12.1 |
| [YOLO11m-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-cls.pt) | 224 | 77.3 | 93.9 | 17.2 ± 0.4 | 2.0 ± 0.0 | 10.4 | 39.3 |
| [YOLO11l-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-cls.pt) | 224 | 78.3 | 94.3 | 23.2 ± 0.3 | 2.8 ± 0.0 | 12.9 | 49.4 |
| [YOLO11x-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-cls.pt) | 224 | 79.5 | 94.9 | 41.4 ± 0.9 | 3.8 ± 0.0 | 28.4 | 110.4 |

- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set. <br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`

</details>

<details><summary>Pose (COCO)</summary>

See [Pose Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples with these models trained on [COCO-Pose](https://docs.ultralytics.com/datasets/pose/coco/), which include 1 pre-trained class, person.

| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-pose.pt) | 640 | 50.0 | 81.0 | 52.4 ± 0.5 | 1.7 ± 0.0 | 2.9 | 7.6 |
| [YOLO11s-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-pose.pt) | 640 | 58.9 | 86.3 | 90.5 ± 0.6 | 2.6 ± 0.0 | 9.9 | 23.2 |
| [YOLO11m-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-pose.pt) | 640 | 64.9 | 89.4 | 187.3 ± 0.8 | 4.9 ± 0.1 | 20.9 | 71.7 |
| [YOLO11l-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-pose.pt) | 640 | 66.1 | 89.9 | 247.7 ± 1.1 | 6.4 ± 0.1 | 26.2 | 90.7 |
| [YOLO11x-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-pose.pt) | 640 | 69.5 | 91.1 | 488.0 ± 13.9 | 12.1 ± 0.2 | 58.8 | 203.3 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO Keypoints val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val pose data=coco-pose.yaml batch=1 device=0|cpu`

</details>

<details><summary>OBB (DOTAv1)</summary>

See [OBB Docs](https://docs.ultralytics.com/tasks/obb/) for usage examples with these models trained on [DOTAv1](https://docs.ultralytics.com/datasets/obb/dota-v2/#dota-v10/), which include 15 pre-trained classes.

| Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-obb.pt) | 1024 | 78.4 | 117.6 ± 0.8 | 4.4 ± 0.0 | 2.7 | 17.2 |
| [YOLO11s-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-obb.pt) | 1024 | 79.5 | 219.4 ± 4.0 | 5.1 ± 0.0 | 9.7 | 57.5 |
| [YOLO11m-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-obb.pt) | 1024 | 80.9 | 562.8 ± 2.9 | 10.1 ± 0.4 | 20.9 | 183.5 |
| [YOLO11l-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-obb.pt) | 1024 | 81.0 | 712.5 ± 5.0 | 13.5 ± 0.6 | 26.2 | 232.0 |
| [YOLO11x-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-obb.pt) | 1024 | 81.3 | 1408.6 ± 7.7 | 28.6 ± 1.0 | 58.8 | 520.2 |

- **mAP<sup>test</sup>** values are for single-model multiscale on the [DOTAv1](https://captain-whu.github.io/DOTA/index.html) dataset. <br>Reproduce by `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit merged results to [DOTA evaluation](https://captain-whu.github.io/DOTA/evaluation.html).
- **Speed** averaged over DOTAv1 val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`

</details>

## <div align="center">Integrations</div>

Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with [Roboflow](https://roboflow.com/?ref=ultralytics), ClearML, [Comet](https://bit.ly/yolov8-readme-comet), Neural Magic and [OpenVINO](https://docs.ultralytics.com/integrations/openvino/), can optimize your AI workflow.

<br>
<a href="https://www.ultralytics.com/hub" target="_blank">
  <img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics active learning integrations"></a>
<br>
<br>

<div align="center">
  <a href="https://roboflow.com/?ref=ultralytics">
    <img src="https://github.com/ultralytics/assets/raw/main/partners/logo-roboflow.png" width="10%" alt="Roboflow logo"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
  <a href="https://clear.ml/">
    <img src="https://github.com/ultralytics/assets/raw/main/partners/logo-clearml.png" width="10%" alt="ClearML logo"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
  <a href="https://bit.ly/yolov8-readme-comet">
    <img src="https://github.com/ultralytics/assets/raw/main/partners/logo-comet.png" width="10%" alt="Comet ML logo"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
  <a href="https://bit.ly/yolov5-neuralmagic">
    <img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="NeuralMagic logo"></a>
</div>

| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :---: | :---: | :---: | :---: |
| Label and export your custom datasets directly to YOLO11 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLO11 using [ClearML](https://clear.ml/) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLO11 models, resume training, and interactively visualize and debug predictions | Run YOLO11 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |

## <div align="center">Ultralytics HUB</div>

Experience seamless AI with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the all-in-one solution for data visualization, YOLO11 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** now!

<a href="https://www.ultralytics.com/hub" target="_blank">
  <img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB preview image"></a>

## <div align="center">Contribute</div>

We love your input! Ultralytics YOLO would not be possible without help from our community. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and fill out our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!

<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->

<a href="https://github.com/ultralytics/ultralytics/graphs/contributors">
  <img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" alt="Ultralytics open-source contributors"></a>

## <div align="center">License</div>

Ultralytics offers two licensing options to accommodate diverse use cases:

- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license) open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for more details.
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, reach out through [Ultralytics Licensing](https://www.ultralytics.com/license).

## <div align="center">Contact</div>

For Ultralytics bug reports and feature requests, please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). Become a member of the Ultralytics [Discord](https://discord.com/invite/ultralytics), [Reddit](https://www.reddit.com/r/ultralytics/), or [Forums](https://community.ultralytics.com/) for asking questions, sharing projects, learning discussions, or for help with all things Ultralytics!

<br>
<div align="center">
  <a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
  <a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
  <a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
  <a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
  <a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
  <a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
  <a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
</div>

README.zh-CN.md
ADDED
@@ -0,0 +1,278 @@
1 |
+
<div align="center">
|
2 |
+
<p>
|
3 |
+
<a href="https://www.ultralytics.com/events/yolovision" target="_blank">
|
4 |
+
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png" alt="YOLO Vision banner"></a>
|
5 |
+
</p>
|
6 |
+
|
7 |
+
[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar) <br>
|
8 |
+
|
9 |
+
<div>
|
10 |
+
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
|
11 |
+
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
|
12 |
+
<a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Ultralytics Docker Pulls"></a>
|
13 |
+
<a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
|
14 |
+
<a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
|
15 |
+
<a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>
|
16 |
+
<br>
|
17 |
+
<a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run Ultralytics on Gradient"></a>
|
18 |
+
<a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open Ultralytics In Colab"></a>
|
19 |
+
<a href="https://www.kaggle.com/ultralytics/yolov8"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open Ultralytics In Kaggle"></a>
|
20 |
+
</div>
|
21 |
+
<br>
|
22 |
+
|
23 |
+
[Ultralytics](https://www.ultralytics.com/) [YOLO11](https://github.com/ultralytics/ultralytics) 是一个尖端的、最先进(SOTA)的模型,基于之前 YOLO 版本的成功,并引入了新功能和改进以进一步提升性能和灵活性。YOLO11 被设计得快速、准确且易于使用,是进行广泛对象检测和跟踪、实例分割、图像分类和姿态估计任务的理想选择。
|
24 |
+
|
25 |
+
我们希望这里的资源能帮助你充分利用 YOLO。请浏览 Ultralytics <a href="https://docs.ultralytics.com/">文档</a> 以获取详细信息,在 <a href="https://github.com/ultralytics/ultralytics/issues/new/choose">GitHub</a> 上提出问题或讨论,成为 Ultralytics <a href="https://discord.com/invite/ultralytics">Discord</a>、<a href="https://reddit.com/r/ultralytics">Reddit</a> 和 <a href="https://community.ultralytics.com/">论坛</a> 的成员!
|
26 |
+
|
27 |
+
想申请企业许可证,请完成 [Ultralytics Licensing](https://www.ultralytics.com/license) 上的表单。
|
28 |
+
|
29 |
+
<img width="100%" src="https://github.com/user-attachments/assets/a311a4ed-bbf2-43b5-8012-5f183a28a845" alt="YOLO11 performance plots"></a>
|
30 |
+
|
31 |
+
<div align="center">
|
32 |
+
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
|
33 |
+
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
34 |
+
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="Ultralytics LinkedIn"></a>
|
35 |
+
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
36 |
+
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="Ultralytics Twitter"></a>
|
37 |
+
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
38 |
+
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="Ultralytics YouTube"></a>
|
39 |
+
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
40 |
+
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="Ultralytics TikTok"></a>
|
41 |
+
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
42 |
+
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="2%" alt="Ultralytics BiliBili"></a>
|
43 |
+
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
44 |
+
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="2%" alt="Ultralytics Discord"></a>
|
45 |
+
</div>
|
46 |
+
</div>
|
47 |
+
|
48 |
+
## <div align="center">文档</div>
|
49 |
+
|
50 |
+
请参阅下方的快速开始安装和使用示例,并查看我们的 [文档](https://docs.ultralytics.com/) 以获取有关训练、验证、预测和部署的完整文档。
|
51 |
+
|
52 |
+
<details open>
|
53 |
+
<summary>安装</summary>
|
54 |
+
|
55 |
+
在 [**Python>=3.8**](https://www.python.org/) 环境中使用 [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/) 通过 pip 安装包含所有[依赖项](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) 的 ultralytics 包。
|
56 |
+
|
57 |
+
[](https://pypi.org/project/ultralytics/) [](https://pepy.tech/project/ultralytics) [](https://pypi.org/project/ultralytics/)
|
58 |
+
|
59 |
+
```bash
|
60 |
+
pip install ultralytics
|
61 |
+
```
|
62 |
+
|
63 |
+
有关其他安装方法,包括 [Conda](https://anaconda.org/conda-forge/ultralytics)、[Docker](https://hub.docker.com/r/ultralytics/ultralytics) 和 Git,请参阅 [快速开始指南](https://docs.ultralytics.com/quickstart/)。
|
64 |
+
|
65 |
+
[](https://anaconda.org/conda-forge/ultralytics) [](https://hub.docker.com/r/ultralytics/ultralytics)
|
66 |
+
|
67 |
+
</details>
|
68 |
+
|
69 |
+
<details open>
|
70 |
+
<summary>使用</summary>
|
71 |
+
|
72 |
+
### CLI
|
73 |
+
|
74 |
+
YOLO 可以直接在命令行接口(CLI)中使用 `yolo` 命令:
|
75 |
+
|
76 |
+
```bash
|
77 |
+
yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
|
78 |
+
```
|
79 |
+
|
80 |
+
`yolo` 可以用于各种任务和模式,并接受额外参数,例如 `imgsz=640`。请参阅 YOLO [CLI 文档](https://docs.ultralytics.com/usage/cli/) 以获取示例。

### Python

YOLO can also be used directly in a Python environment, and accepts the same [arguments](https://docs.ultralytics.com/usage/cfg/) as in the CLI example above:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")

# Train the model
train_results = model.train(
    data="coco8.yaml",  # path to the dataset YAML
    epochs=100,  # number of training epochs
    imgsz=640,  # training image size
    device="cpu",  # device to run on, e.g. device=0 or device=0,1,2,3 or device=cpu
)

# Evaluate model performance on the validation set
metrics = model.val()

# Perform object detection on an image
results = model("path/to/image.jpg")
results[0].show()

# Export the model to ONNX format
path = model.export(format="onnx")  # returns the path to the exported model
```

See the YOLO [Python Docs](https://docs.ultralytics.com/usage/python/) for more examples.

</details>

## <div align="center">Models</div>

YOLO11 [Detect](https://docs.ultralytics.com/tasks/detect/), [Segment](https://docs.ultralytics.com/tasks/segment/) and [Pose](https://docs.ultralytics.com/tasks/pose/) models pretrained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset are available here, as well as YOLO11 [Classify](https://docs.ultralytics.com/tasks/classify/) models pretrained on the [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) dataset. [Track](https://docs.ultralytics.com/modes/track/) mode is available for all Detect, Segment and Pose models.

<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png" alt="Ultralytics YOLO supported tasks">

All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
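Because all of these modes share the same interface, tracking is a one-line invocation; a hedged sketch (the video path is a placeholder):

```bash
# Run multi-object tracking on a video with a pretrained Detect model
yolo track model=yolo11n.pt source="path/to/video.mp4"
```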

<details open><summary>Detection (COCO)</summary>

See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/detect/coco/), which include 80 pre-trained classes.

| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt) | 640 | 39.5 | 56.1 ± 0.8 | 1.5 ± 0.0 | 2.6 | 6.5 |
| [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt) | 640 | 47.0 | 90.0 ± 1.2 | 2.5 ± 0.0 | 9.4 | 21.5 |
| [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt) | 640 | 51.5 | 183.2 ± 2.0 | 4.7 ± 0.1 | 20.1 | 68.0 |
| [YOLO11l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l.pt) | 640 | 53.4 | 238.6 ± 1.4 | 6.2 ± 0.1 | 25.3 | 86.9 |
| [YOLO11x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt) | 640 | 54.7 | 462.8 ± 6.7 | 11.3 ± 0.2 | 56.9 | 194.9 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val detect data=coco.yaml batch=1 device=0|cpu`

</details>

<details><summary>Segmentation (COCO)</summary>

See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples with these models trained on [COCO-Seg](https://docs.ultralytics.com/datasets/segment/coco/), which include 80 pre-trained classes.

| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 38.9 | 32.0 | 65.9 ± 1.1 | 1.8 ± 0.0 | 2.9 | 10.4 |
| [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 46.6 | 37.8 | 117.6 ± 4.9 | 2.9 ± 0.0 | 10.1 | 35.5 |
| [YOLO11m-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-seg.pt) | 640 | 51.5 | 41.5 | 281.6 ± 1.2 | 6.3 ± 0.1 | 22.4 | 123.3 |
| [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
| [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`

</details>

<details><summary>Classification (ImageNet)</summary>

See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples with these models trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/), which include 1000 pre-trained classes.

| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-cls.pt) | 224 | 70.0 | 89.4 | 5.0 ± 0.3 | 1.1 ± 0.0 | 1.6 | 3.3 |
| [YOLO11s-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-cls.pt) | 224 | 75.4 | 92.7 | 7.9 ± 0.2 | 1.3 ± 0.0 | 5.5 | 12.1 |
| [YOLO11m-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-cls.pt) | 224 | 77.3 | 93.9 | 17.2 ± 0.4 | 2.0 ± 0.0 | 10.4 | 39.3 |
| [YOLO11l-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-cls.pt) | 224 | 78.3 | 94.3 | 23.2 ± 0.3 | 2.8 ± 0.0 | 12.9 | 49.4 |
| [YOLO11x-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-cls.pt) | 224 | 79.5 | 94.9 | 41.4 ± 0.9 | 3.8 ± 0.0 | 28.4 | 110.4 |

- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set. <br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`

</details>

<details><summary>Pose (COCO)</summary>

See [Pose Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples with these models trained on [COCO-Pose](https://docs.ultralytics.com/datasets/pose/coco/), which include 1 pre-trained class, person.

| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-pose.pt) | 640 | 50.0 | 81.0 | 52.4 ± 0.5 | 1.7 ± 0.0 | 2.9 | 7.6 |
| [YOLO11s-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-pose.pt) | 640 | 58.9 | 86.3 | 90.5 ± 0.6 | 2.6 ± 0.0 | 9.9 | 23.2 |
| [YOLO11m-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-pose.pt) | 640 | 64.9 | 89.4 | 187.3 ± 0.8 | 4.9 ± 0.1 | 20.9 | 71.7 |
| [YOLO11l-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-pose.pt) | 640 | 66.1 | 89.9 | 247.7 ± 1.1 | 6.4 ± 0.1 | 26.2 | 90.7 |
| [YOLO11x-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-pose.pt) | 640 | 69.5 | 91.1 | 488.0 ± 13.9 | 12.1 ± 0.2 | 58.8 | 203.3 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO Keypoints val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val pose data=coco-pose.yaml batch=1 device=0|cpu`

</details>

<details><summary>OBB (DOTAv1)</summary>

See [OBB Docs](https://docs.ultralytics.com/tasks/obb/) for usage examples with these models trained on [DOTAv1](https://docs.ultralytics.com/datasets/obb/dota-v2/#dota-v10/), which include 15 pre-trained classes.

| Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --- | --- | --- | --- | --- | --- | --- |
| [YOLO11n-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-obb.pt) | 1024 | 78.4 | 117.56 ± 0.80 | 4.43 ± 0.01 | 2.7 | 17.2 |
| [YOLO11s-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-obb.pt) | 1024 | 79.5 | 219.41 ± 4.00 | 5.13 ± 0.02 | 9.7 | 57.5 |
| [YOLO11m-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-obb.pt) | 1024 | 80.9 | 562.81 ± 2.87 | 10.07 ± 0.38 | 20.9 | 183.5 |
| [YOLO11l-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-obb.pt) | 1024 | 81.0 | 712.49 ± 4.98 | 13.46 ± 0.55 | 26.2 | 232.0 |
| [YOLO11x-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-obb.pt) | 1024 | 81.3 | 1408.63 ± 7.67 | 28.59 ± 0.96 | 58.8 | 520.2 |

- **mAP<sup>test</sup>** values are for single-model multiscale on the [DOTAv1](https://captain-whu.github.io/DOTA/index.html) dataset. <br>Reproduce by `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit merged results to [DOTA evaluation](https://captain-whu.github.io/DOTA/evaluation.html).
- **Speed** averaged over DOTAv1 val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`

</details>

## <div align="center">Integrations</div>

Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization and model management. Discover how Ultralytics, in collaboration with [Roboflow](https://roboflow.com/?ref=ultralytics), ClearML, [Comet](https://bit.ly/yolov8-readme-comet), Neural Magic and [OpenVINO](https://docs.ultralytics.com/integrations/openvino/), can optimize your AI workflow.

<br>
<a href="https://www.ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics active learning integrations"></a>
<br>
<br>

<div align="center">
<a href="https://roboflow.com/?ref=ultralytics">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-roboflow.png" width="10%" alt="Roboflow logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://clear.ml/">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-clearml.png" width="10%" alt="ClearML logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://bit.ly/yolov8-readme-comet">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-comet.png" width="10%" alt="Comet ML logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://bit.ly/yolov5-neuralmagic">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="NeuralMagic logo"></a>
</div>

| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :---: | :---: | :---: | :---: |
| Label and export your custom datasets directly to YOLO11 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLO11 using [ClearML](https://clear.ml/) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLO11 models, resume training, and interactively visualize and debug predictions | Run YOLO11 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |

## <div align="center">Ultralytics HUB</div>

Experience seamless AI with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the all-in-one solution for data visualization, YOLO11 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for free now!

<a href="https://www.ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB preview image"></a>

## <div align="center">Contribute</div>

We love your input! Ultralytics YOLO would not be possible without help from our community. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and fill out our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!

<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->

<a href="https://github.com/ultralytics/ultralytics/graphs/contributors">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" alt="Ultralytics open-source contributors"></a>

## <div align="center">License</div>

Ultralytics offers two licensing options to accommodate diverse use cases:

- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license) open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, contact us through [Ultralytics Licensing](https://www.ultralytics.com/license).

## <div align="center">Contact</div>

For Ultralytics bug reports and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). Become a member of the Ultralytics [Discord](https://discord.com/invite/ultralytics), [Reddit](https://www.reddit.com/r/ultralytics/) or [Forums](https://community.ultralytics.com/) for asking questions, sharing projects, learning discussions, or for help with all things Ultralytics!

<br>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
</div>
docker/Dockerfile
ADDED
@@ -0,0 +1,93 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CUDA-optimized for YOLO11 single/multi-GPU training and inference

# Start FROM PyTorch image https://hub.docker.com/r/pytorch/pytorch or nvcr.io/nvidia/pytorch:23.03-py3
FROM pytorch/pytorch:2.4.1-cuda12.1-cudnn9-runtime

# Set environment variables
# Avoid DDP error "MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library" https://github.com/pytorch/pytorch/issues/37377
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1 \
    MKL_THREADING_LAYER=GNU \
    OMP_NUM_THREADS=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
# libsm6 required by libqxcb to create QT-based windows for visualization; set 'QT_DEBUG_PLUGINS=1' to test in docker
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 libsm6 \
    && rm -rf /var/lib/apt/lists/*

# Security updates
# https://security.snyk.io/vuln/SNYK-UBUNTU1804-OPENSSL-3314796
RUN apt upgrade --no-install-recommends -y openssl tar

# Create working directory
WORKDIR /ultralytics

# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
# Pin TensorRT-cu12==10.1.0 to avoid 10.2.0 bug https://github.com/ultralytics/ultralytics/pull/14239 (note -cu12 must be used)
RUN pip install -e ".[export]" "tensorrt-cu12==10.1.0" "albumentations>=1.4.6" comet pycocotools

# Run exports to AutoInstall packages
# Edge TPU export fails the first time so is run twice here
RUN yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32 || yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32
RUN yolo export model=tmp/yolo11n.pt format=ncnn imgsz=32
# Requires <= Python 3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
RUN pip install "paddlepaddle>=2.6.0" x2paddle
# Fix error: `np.bool` was a deprecated alias for the builtin `bool` segmentation error in Tests
RUN pip install numpy==1.23.5

# Remove extra build files
RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json


# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest && sudo docker build -f docker/Dockerfile -t $t . && sudo docker push $t

# Pull and Run with access to all GPUs
# t=ultralytics/ultralytics:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all $t

# Pull and Run with access to GPUs 2 and 3 (inside container CUDA devices will appear as 0 and 1)
# t=ultralytics/ultralytics:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus '"device=2,3"' $t

# Pull and Run with local directory access
# t=ultralytics/ultralytics:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/shared/datasets:/datasets $t

# Kill all
# sudo docker kill $(sudo docker ps -q)

# Kill all image-based
# sudo docker kill $(sudo docker ps -qa --filter ancestor=ultralytics/ultralytics:latest)

# DockerHub tag update
# t=ultralytics/ultralytics:latest tnew=ultralytics/ultralytics:v6.2 && sudo docker pull $t && sudo docker tag $t $tnew && sudo docker push $tnew

# Clean up
# sudo docker system prune -a --volumes

# Update Ubuntu drivers
# https://www.maketecheasier.com/install-nvidia-drivers-ubuntu/

# DDP test
# python -m torch.distributed.run --nproc_per_node 2 --master_port 1 train.py --epochs 3

# GCP VM from Image
# docker.io/ultralytics/ultralytics:latest
|
docker/Dockerfile-arm64
ADDED
@@ -0,0 +1,58 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest-arm64 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is aarch64-compatible for Apple M1, M2, M3, Raspberry Pi and other ARM architectures

# Start FROM Ubuntu image https://hub.docker.com/_/ubuntu with "FROM arm64v8/ubuntu:22.04" (deprecated)
# Start FROM Debian image for arm64v8 https://hub.docker.com/r/arm64v8/debian (new)
FROM arm64v8/debian:bookworm-slim

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
# pkg-config and libhdf5-dev (not included) are needed to build 'h5py==3.11.0' aarch64 wheel required by 'tensorflow'
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    python3-pip git zip unzip wget curl htop gcc libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Create working directory
WORKDIR /ultralytics

# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
RUN pip install -e ".[export]"

# Creates a symbolic link to make 'python' point to 'python3'
RUN ln -sf /usr/bin/python3 /usr/bin/python

# Remove extra build files
RUN rm -rf /root/.config/Ultralytics/persistent_cache.json

# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-arm64 && sudo docker build --platform linux/arm64 -f docker/Dockerfile-arm64 -t $t . && sudo docker push $t

# Run
# t=ultralytics/ultralytics:latest-arm64 && sudo docker run -it --ipc=host $t

# Pull and Run
# t=ultralytics/ultralytics:latest-arm64 && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with local volume mounted
# t=ultralytics/ultralytics:latest-arm64 && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/shared/datasets:/datasets $t
docker/Dockerfile-conda
ADDED
@@ -0,0 +1,50 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest-conda image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is optimized for Ultralytics Anaconda (https://anaconda.org/conda-forge/ultralytics) installation and usage

# Start FROM miniconda3 image https://hub.docker.com/r/continuumio/miniconda3
FROM continuumio/miniconda3:latest

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Install linux packages
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    libgl1 \
    && rm -rf /var/lib/apt/lists/*

# Copy contents
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Install conda packages
# mkl required to fix 'OSError: libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory'
RUN conda config --set solver libmamba && \
    conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia && \
    conda install -c conda-forge ultralytics mkl
# conda install -c pytorch -c nvidia -c conda-forge pytorch torchvision pytorch-cuda=12.1 ultralytics mkl

# Remove extra build files
RUN rm -rf /root/.config/Ultralytics/persistent_cache.json

# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-conda && sudo docker build -f docker/Dockerfile-conda -t $t . && sudo docker push $t

# Run
# t=ultralytics/ultralytics:latest-conda && sudo docker run -it --ipc=host $t

# Pull and Run
# t=ultralytics/ultralytics:latest-conda && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with local volume mounted
# t=ultralytics/ultralytics:latest-conda && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/shared/datasets:/datasets $t
docker/Dockerfile-cpu
ADDED
@@ -0,0 +1,62 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest-cpu image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLO11 deployments

# Start FROM Ubuntu image https://hub.docker.com/_/ubuntu
FROM ubuntu:23.10

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    python3-pip git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Create working directory
WORKDIR /ultralytics

# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
RUN pip install -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu

# Run exports to AutoInstall packages
RUN yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32
RUN yolo export model=tmp/yolo11n.pt format=ncnn imgsz=32
# Requires Python<=3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
# RUN pip install "paddlepaddle>=2.6.0" x2paddle

# Creates a symbolic link to make 'python' point to 'python3'
RUN ln -sf /usr/bin/python3 /usr/bin/python

# Remove extra build files
RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json

# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-cpu && sudo docker build -f docker/Dockerfile-cpu -t $t . && sudo docker push $t

# Run
# t=ultralytics/ultralytics:latest-cpu && sudo docker run -it --ipc=host --name NAME $t

# Pull and Run
# t=ultralytics/ultralytics:latest-cpu && sudo docker pull $t && sudo docker run -it --ipc=host --name NAME $t

# Pull and Run with local volume mounted
# t=ultralytics/ultralytics:latest-cpu && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/shared/datasets:/datasets $t
docker/Dockerfile-jetson-jetpack4
ADDED
@@ -0,0 +1,69 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:jetson-jetpack4 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Supports JetPack4.x for YOLO11 on Jetson Nano, TX2, Xavier NX, AGX Xavier

# Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-cuda
FROM nvcr.io/nvidia/l4t-cuda:10.2.460-runtime

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Add NVIDIA repositories for TensorRT dependencies
RUN wget -q -O - https://repo.download.nvidia.com/jetson/jetson-ota-public.asc | apt-key add - && \
    echo "deb https://repo.download.nvidia.com/jetson/common r32.7 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list && \
    echo "deb https://repo.download.nvidia.com/jetson/t194 r32.7 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list

# Install dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    git python3.8 python3.8-dev python3-pip python3-libnvinfer libopenmpi-dev libopenblas-base libomp-dev gcc \
    && rm -rf /var/lib/apt/lists/*

# Create symbolic links for python3.8 and pip3
RUN ln -sf /usr/bin/python3.8 /usr/bin/python3
RUN ln -s /usr/bin/pip3 /usr/bin/pip

# Create working directory
WORKDIR /ultralytics

# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Download onnxruntime-gpu 1.8.0 and tensorrt 8.2.0.6
# Other versions can be seen in https://elinux.org/Jetson_Zoo and https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
ADD https://nvidia.box.com/shared/static/gjqofg7rkg97z3gc8jeyup6t8n9j8xjw.whl onnxruntime_gpu-1.8.0-cp38-cp38-linux_aarch64.whl
ADD https://forums.developer.nvidia.com/uploads/short-url/hASzFOm9YsJx6VVFrDW1g44CMmv.whl tensorrt-8.2.0.6-cp38-none-linux_aarch64.whl

# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
RUN pip install \
    onnxruntime_gpu-1.8.0-cp38-cp38-linux_aarch64.whl \
    tensorrt-8.2.0.6-cp38-none-linux_aarch64.whl \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/torch-1.11.0a0+gitbc2c6ed-cp38-cp38-linux_aarch64.whl \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/torchvision-0.12.0a0+9b5a3fe-cp38-cp38-linux_aarch64.whl
RUN pip install -e ".[export]"

# Remove extra build files
RUN rm -rf *.whl /root/.config/Ultralytics/persistent_cache.json

# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-jetson-jetpack4 && sudo docker build --platform linux/arm64 -f docker/Dockerfile-jetson-jetpack4 -t $t . && sudo docker push $t

# Run
# t=ultralytics/ultralytics:latest-jetson-jetpack4 && sudo docker run -it --ipc=host $t

# Pull and Run
# t=ultralytics/ultralytics:latest-jetson-jetpack4 && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with NVIDIA runtime
# t=ultralytics/ultralytics:latest-jetson-jetpack4 && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
docker/Dockerfile-jetson-jetpack5
ADDED
@@ -0,0 +1,62 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:jetson-jetpack5 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Supports JetPack5.x for YOLO11 on Jetson Xavier NX, AGX Xavier, AGX Orin, Orin Nano and Orin NX

# Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch
FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages
# libusb-1.0-0 required for 'tflite_support' package when exporting to TFLite
# pkg-config and libhdf5-dev (not included) are needed to build 'h5py==3.11.0' aarch64 wheel required by 'tensorflow'
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Create working directory
WORKDIR /ultralytics

# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Remove opencv-python from Ultralytics dependencies as it conflicts with opencv-python installed in base image
RUN sed -i '/opencv-python/d' pyproject.toml

# Download onnxruntime-gpu 1.15.1 for Jetson Linux 35.2.1 (JetPack 5.1). Other versions can be seen in https://elinux.org/Jetson_Zoo#ONNX_Runtime
ADD https://nvidia.box.com/shared/static/mvdcltm9ewdy2d5nurkiqorofz1s53ww.whl onnxruntime_gpu-1.15.1-cp38-cp38-linux_aarch64.whl

# Install pip packages manually for TensorRT compatibility https://github.com/NVIDIA/TensorRT/issues/2567
RUN python3 -m pip install --upgrade pip wheel
RUN pip install onnxruntime_gpu-1.15.1-cp38-cp38-linux_aarch64.whl
RUN pip install -e ".[export]"

# Remove extra build files
RUN rm -rf *.whl /root/.config/Ultralytics/persistent_cache.json

# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-jetson-jetpack5 && sudo docker build --platform linux/arm64 -f docker/Dockerfile-jetson-jetpack5 -t $t . && sudo docker push $t

# Run
# t=ultralytics/ultralytics:latest-jetson-jetpack5 && sudo docker run -it --ipc=host $t

# Pull and Run
# t=ultralytics/ultralytics:latest-jetson-jetpack5 && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with NVIDIA runtime
# t=ultralytics/ultralytics:latest-jetson-jetpack5 && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
docker/Dockerfile-jetson-jetpack6
ADDED
@@ -0,0 +1,59 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:jetson-jetpack6 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Supports JetPack6.x for YOLO11 on Jetson AGX Orin, Orin NX and Orin Nano Series

# Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-jetpack
FROM nvcr.io/nvidia/l4t-jetpack:r36.3.0

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Install dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    git python3-pip libopenmpi-dev libopenblas-base libomp-dev \
    && rm -rf /var/lib/apt/lists/*

# Create working directory
WORKDIR /ultralytics

# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Download onnxruntime-gpu 1.18.0 from https://elinux.org/Jetson_Zoo and https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
ADD https://nvidia.box.com/shared/static/48dtuob7meiw6ebgfsfqakc9vse62sg4.whl onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl

# Pip install onnxruntime-gpu, torch, torchvision and ultralytics
RUN python3 -m pip install --upgrade pip wheel
RUN pip install \
    onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/torch-2.3.0-cp310-cp310-linux_aarch64.whl \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl
RUN pip install -e ".[export]"

# Remove extra build files
RUN rm -rf *.whl /root/.config/Ultralytics/persistent_cache.json

# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-jetson-jetpack6 && sudo docker build --platform linux/arm64 -f docker/Dockerfile-jetson-jetpack6 -t $t . && sudo docker push $t

# Run
# t=ultralytics/ultralytics:latest-jetson-jetpack6 && sudo docker run -it --ipc=host $t

# Pull and Run
# t=ultralytics/ultralytics:latest-jetson-jetpack6 && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with NVIDIA runtime
# t=ultralytics/ultralytics:latest-jetson-jetpack6 && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
docker/Dockerfile-python
ADDED
@@ -0,0 +1,59 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest-python image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLO11 deployments

# Use official Python base image for reproducibility (3.11.10 for export and 3.12.6 for inference)
FROM python:3.11.10-slim-bookworm

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1

# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
    https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
    /root/.config/Ultralytics/

# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    python3-pip git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Create working directory
WORKDIR /ultralytics

# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .

# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
RUN pip install -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu

# Run exports to AutoInstall packages
RUN yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32
RUN yolo export model=tmp/yolo11n.pt format=ncnn imgsz=32
# Requires Python<=3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
RUN pip install "paddlepaddle>=2.6.0" x2paddle

# Remove extra build files
RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json

# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-python && sudo docker build -f docker/Dockerfile-python -t $t . && sudo docker push $t

# Run
# t=ultralytics/ultralytics:latest-python && sudo docker run -it --ipc=host $t

# Pull and Run
# t=ultralytics/ultralytics:latest-python && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with local volume mounted
# t=ultralytics/ultralytics:latest-python && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/shared/datasets:/datasets $t
docker/Dockerfile-runner
ADDED
@@ -0,0 +1,45 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds GitHub actions CI runner image for deployment to DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CUDA-optimized for YOLO11 single/multi-GPU training and inference tests

# Start FROM Ultralytics GPU image
FROM ultralytics/ultralytics:latest

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_BREAK_SYSTEM_PACKAGES=1 \
    RUNNER_ALLOW_RUNASROOT=1 \
    DEBIAN_FRONTEND=noninteractive

# Set the working directory
WORKDIR /actions-runner

# Download and unpack the latest runner from https://github.com/actions/runner
RUN FILENAME=actions-runner-linux-x64-2.317.0.tar.gz && \
    curl -o $FILENAME -L https://github.com/actions/runner/releases/download/v2.317.0/$FILENAME && \
    tar xzf $FILENAME && \
    rm $FILENAME

# Install runner dependencies
RUN pip install pytest-cov
RUN ./bin/installdependencies.sh && \
    apt-get -y install libicu-dev

# Inline ENTRYPOINT command to configure and start runner with default TOKEN and NAME
ENTRYPOINT sh -c './config.sh --url https://github.com/ultralytics/ultralytics \
    --token ${GITHUB_RUNNER_TOKEN:-TOKEN} \
    --name ${GITHUB_RUNNER_NAME:-NAME} \
    --labels gpu-latest \
    --replace && \
    ./run.sh'


# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/ultralytics:latest-runner && sudo docker build -f docker/Dockerfile-runner -t $t . && sudo docker push $t

# Pull and Run in detached mode with access to GPUs 0 and 1
# t=ultralytics/ultralytics:latest-runner && sudo docker run -d -e GITHUB_RUNNER_TOKEN=TOKEN -e GITHUB_RUNNER_NAME=NAME --ipc=host --gpus '"device=0,1"' $t
docs/README.md
ADDED
@@ -0,0 +1,146 @@
<br>
<a href="https://www.ultralytics.com/" target="_blank"><img src="https://raw.githubusercontent.com/ultralytics/assets/main/logo/Ultralytics_Logotype_Original.svg" width="320" alt="Ultralytics logo"></a>

# 📚 Ultralytics Docs

[Ultralytics](https://www.ultralytics.com/) Docs are the gateway to understanding and utilizing our cutting-edge machine learning tools. These documents are deployed to [https://docs.ultralytics.com](https://docs.ultralytics.com/) for your convenience.

[](https://github.com/ultralytics/docs/actions/workflows/pages/pages-build-deployment)
[](https://github.com/ultralytics/docs/actions/workflows/links.yml)
[](https://github.com/ultralytics/docs/actions/workflows/check_domains.yml)
[](https://github.com/ultralytics/docs/actions/workflows/format.yml)

<a href="https://discord.com/invite/ultralytics"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a> <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a> <a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>

## 🛠️ Installation

[](https://pypi.org/project/ultralytics/)
[](https://pepy.tech/project/ultralytics)
[](https://pypi.org/project/ultralytics/)

To install the ultralytics package in developer mode, ensure you have Git and Python 3 installed on your system. Then, follow these steps:

1. Clone the ultralytics repository to your local machine using Git:

   ```bash
   git clone https://github.com/ultralytics/ultralytics.git
   ```

2. Navigate to the cloned repository's root directory:

   ```bash
   cd ultralytics
   ```

3. Install the package in developer mode using pip (or pip3 for Python 3):

   ```bash
   pip install -e '.[dev]'
   ```

- This command installs the ultralytics package along with all development dependencies, allowing you to modify the package code and have the changes immediately reflected in your Python environment.
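A quick, hedged way to sanity-check the editable install is the `checks()` utility that ships with the ultralytics package:

```bash
# Print environment and installation diagnostics
python -c "import ultralytics; ultralytics.checks()"
```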

## 🚀 Building and Serving Locally

The `mkdocs serve` command builds and serves a local version of your MkDocs documentation, ideal for development and testing:

```bash
mkdocs serve
```

- #### Command Breakdown:

  - `mkdocs` is the main MkDocs command-line interface.
  - `serve` is the subcommand to build and locally serve your documentation.

- 🧐 Note:

  - Grasp changes to the docs in real-time as `mkdocs serve` supports live reloading.
  - To stop the local server, press `CTRL+C`.
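If the default port is busy, `mkdocs serve` also accepts a custom development address through its standard `-a`/`--dev-addr` option, for example:

```bash
# Serve the docs on an alternative local port
mkdocs serve -a 127.0.0.1:8001
```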

## 🌍 Building and Serving Multi-Language

Supporting multi-language documentation? Follow these steps:

1. Stage all new language \*.md files with Git:

   ```bash
   git add docs/**/*.md -f
   ```

2. Build all languages to the `/site` folder, ensuring relevant root-level files are present:

   ```bash
   # Clear existing /site directory
   rm -rf site

   # Loop through each language config file and build
   mkdocs build -f docs/mkdocs.yml
   for file in docs/mkdocs_*.yml; do
       echo "Building MkDocs site with $file"
       mkdocs build -f "$file"
   done
   ```

3. To preview your site, initiate a simple HTTP server:

   ```bash
   cd site
   python -m http.server
   # Open in your preferred browser
   ```

- 🖥️ Access the live site at `http://localhost:8000`.

## 📤 Deploying Your Documentation Site

Choose a hosting provider and deployment method for your MkDocs documentation:

- Configure `mkdocs.yml` with deployment settings.
- Use `mkdocs gh-deploy` to build and deploy your site, as in the example below.

- ### GitHub Pages Deployment Example:

  ```bash
  mkdocs gh-deploy
  ```

- Update the "Custom domain" in your repository's settings for a personalized URL.

- For detailed deployment guidance, consult the [MkDocs documentation](https://www.mkdocs.org/user-guide/deploying-your-docs/).

## 💡 Contribute

We cherish the community's input as it drives Ultralytics open-source initiatives. Dive into the [Contributing Guide](https://docs.ultralytics.com/help/contributing/) and share your thoughts via our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). A heartfelt thank you 🙏 to each contributor!

## 📜 License

Ultralytics Docs presents two licensing options:

- **AGPL-3.0 License**: Perfect for academia and open collaboration. Details are in the [LICENSE](https://github.com/ultralytics/docs/blob/main/LICENSE) file.
- **Enterprise License**: Tailored for commercial usage, offering a seamless blend of Ultralytics technology in your products. Learn more at [Ultralytics Licensing](https://www.ultralytics.com/license).

## ✉️ Contact

For Ultralytics bug reports and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). Become a member of the Ultralytics [Discord](https://discord.com/invite/ultralytics), [Reddit](https://www.reddit.com/r/ultralytics/), or [Forums](https://community.ultralytics.com/) for asking questions, sharing projects, learning discussions, or for help with all things Ultralytics!

<br>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
</div>
docs/build_docs.py
ADDED
@@ -0,0 +1,258 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
"""
Automates the building and post-processing of MkDocs documentation, particularly for projects with multilingual
content. It streamlines the workflow for generating localized versions of the documentation and updating HTML links
to ensure they are correctly formatted.

Key Features:
    - Automated building of MkDocs documentation: The script compiles both the main documentation and
      any localized versions specified in separate MkDocs configuration files.
    - Post-processing of generated HTML files: After the documentation is built, the script updates all
      HTML files to remove the '.md' extension from internal links. This ensures that links in the built
      HTML documentation correctly point to other HTML pages rather than Markdown files, which is crucial
      for proper navigation within the web-based documentation.

Usage:
    - Run the script from the root directory of your MkDocs project, e.g. `python docs/build_docs.py`.
    - Ensure that MkDocs is installed and that all MkDocs configuration files (main and localized versions)
      are present in the project directory.
    - The script first builds the documentation using MkDocs, then scans the generated HTML files in the 'site'
      directory to update the internal links.
    - It's ideal for projects where the documentation is written in Markdown and needs to be served as a static website.

Note:
    - This script is built to be run in an environment where Python and MkDocs are installed and properly configured.
"""

import os
import re
import shutil
import subprocess
from pathlib import Path

from bs4 import BeautifulSoup
from tqdm import tqdm

os.environ["JUPYTER_PLATFORM_DIRS"] = "1"  # fix DeprecationWarning: Jupyter is migrating to use standard platformdirs
DOCS = Path(__file__).parent.resolve()
SITE = DOCS.parent / "site"


def prepare_docs_markdown(clone_repos=True):
    """Prepares Markdown source files for the build, optionally cloning the hub-sdk docs first."""
    if SITE.exists():
        print(f"Removing existing {SITE}")
        shutil.rmtree(SITE)

    # Get hub-sdk repo
    if clone_repos:
        repo = "https://github.com/ultralytics/hub-sdk"
        local_dir = DOCS.parent / Path(repo).name
        if not local_dir.exists():
            os.system(f"git clone {repo} {local_dir}")
        os.system(f"git -C {local_dir} pull")  # update repo
        shutil.rmtree(DOCS / "en/hub/sdk", ignore_errors=True)  # delete if exists
        shutil.copytree(local_dir / "docs", DOCS / "en/hub/sdk")  # for docs
        shutil.rmtree(DOCS.parent / "hub_sdk", ignore_errors=True)  # delete if exists
        shutil.copytree(local_dir / "hub_sdk", DOCS.parent / "hub_sdk")  # for mkdocstrings
        print(f"Cloned/Updated {repo} in {local_dir}")

    # Add frontmatter
    for file in tqdm((DOCS / "en").rglob("*.md"), desc="Adding frontmatter"):
        update_markdown_files(file)


def update_page_title(file_path: Path, new_title: str):
    """Update the title of an HTML file."""
    # Read the content of the file
    with open(file_path, encoding="utf-8") as file:
        content = file.read()

    # Replace the existing title with the new title
    updated_content = re.sub(r"<title>.*?</title>", f"<title>{new_title}</title>", content)

    # Write the updated content back to the file
    with open(file_path, "w", encoding="utf-8") as file:
        file.write(updated_content)


def update_html_head(script=""):
    """Insert the given script into the head section of each built HTML file."""
    html_files = Path(SITE).rglob("*.html")
    for html_file in tqdm(html_files, desc="Processing HTML files"):
        with html_file.open("r", encoding="utf-8") as file:
            html_content = file.read()

        if script in html_content:  # script already in HTML file
            return

        head_end_index = html_content.lower().rfind("</head>")
        if head_end_index != -1:
            # Add the specified JavaScript to the HTML file just before the end of the head tag.
            new_html_content = html_content[:head_end_index] + script + html_content[head_end_index:]
            with html_file.open("w", encoding="utf-8") as file:
                file.write(new_html_content)


def update_subdir_edit_links(subdir="", docs_url=""):
    """Point the 'Edit this page' links of HTML files in a subdirectory at an external docs repo."""
    if subdir.startswith("/"):
        subdir = subdir[1:]  # strip the leading slash so the path joins correctly below
    html_files = (SITE / subdir).rglob("*.html")
    for html_file in tqdm(html_files, desc="Processing subdir files"):
        with html_file.open("r", encoding="utf-8") as file:
            soup = BeautifulSoup(file, "html.parser")

        # Find the anchor tag and update its href attribute
        a_tag = soup.find("a", {"class": "md-content__button md-icon"})
        if a_tag and a_tag["title"] == "Edit this page":
            a_tag["href"] = f"{docs_url}{a_tag['href'].split(subdir)[-1]}"

        # Write the updated HTML back to the file
        with open(html_file, "w", encoding="utf-8") as file:
            file.write(str(soup))


def update_markdown_files(md_filepath: Path):
    """Creates or updates a Markdown file, ensuring frontmatter is present."""
    if md_filepath.exists():
        content = md_filepath.read_text().strip()

        # Replace apostrophes
        content = content.replace("‘", "'").replace("’", "'")

        # Add frontmatter if missing
        if not content.strip().startswith("---\n") and "macros" not in md_filepath.parts:  # skip macros directory
            header = "---\ncomments: true\ndescription: TODO ADD DESCRIPTION\nkeywords: TODO ADD KEYWORDS\n---\n\n"
            content = header + content

        # Ensure MkDocs admonitions "=== " lines are preceded and followed by empty newlines
        lines = content.split("\n")
        new_lines = []
        for i, line in enumerate(lines):
            stripped_line = line.strip()
            if stripped_line.startswith("=== "):
                if i > 0 and new_lines[-1] != "":
                    new_lines.append("")
                new_lines.append(line)
                if i < len(lines) - 1 and lines[i + 1].strip() != "":
                    new_lines.append("")
            else:
                new_lines.append(line)
        content = "\n".join(new_lines)

        # Add EOF newline if missing
        if not content.endswith("\n"):
            content += "\n"

        # Save page
        md_filepath.write_text(content)


def update_docs_html():
    """Updates titles, edit links, head sections, and converts plaintext links in HTML documentation."""
    # Update 404 titles
    update_page_title(SITE / "404.html", new_title="Ultralytics Docs - Not Found")

    # Update edit links
    update_subdir_edit_links(
        subdir="hub/sdk/",  # do not use leading slash
        docs_url="https://github.com/ultralytics/hub-sdk/tree/main/docs/",
    )

    # Convert plaintext links to HTML hyperlinks
    files_modified = 0
    for html_file in tqdm(SITE.rglob("*.html"), desc="Converting plaintext links"):
        with open(html_file, encoding="utf-8") as file:
            content = file.read()
        updated_content = convert_plaintext_links_to_html(content)
        if updated_content != content:
            with open(html_file, "w", encoding="utf-8") as file:
                file.write(updated_content)
            files_modified += 1
    print(f"Modified plaintext links in {files_modified} files.")

    # Update HTML file head section
    script = ""
    if script:
        update_html_head(script)

    # Delete the /macros directory from the built site
    macros_dir = SITE / "macros"
    if macros_dir.exists():
        print(f"Removing /macros directory from site: {macros_dir}")
        shutil.rmtree(macros_dir)


def convert_plaintext_links_to_html(content):
    """Convert plaintext links to HTML hyperlinks in the main content area only."""
    soup = BeautifulSoup(content, "html.parser")

    # Find the main content area (adjust this selector based on your HTML structure)
    main_content = soup.find("main") or soup.find("div", class_="md-content")
    if not main_content:
        return content  # Return original content if main content area not found

    modified = False
    for paragraph in main_content.find_all(["p", "li"]):  # Focus on paragraphs and list items
        for text_node in paragraph.find_all(string=True, recursive=False):
            if text_node.parent.name not in {"a", "code"}:  # Ignore links and code blocks
                new_text = re.sub(
                    r'(https?://[^\s()<>]+(?:\.[^\s()<>]+)+)(?<![.,:;\'"])',
                    r'<a href="\1">\1</a>',
                    str(text_node),
                )
                if "<a" in new_text:
                    new_soup = BeautifulSoup(new_text, "html.parser")
                    text_node.replace_with(new_soup)
                    modified = True

    return str(soup) if modified else content


def remove_macros():
    """Removes the /macros directory and related entries in sitemap.xml from the built site."""
    shutil.rmtree(SITE / "macros", ignore_errors=True)
    (SITE / "sitemap.xml.gz").unlink(missing_ok=True)

    # Process sitemap.xml
    sitemap = SITE / "sitemap.xml"
    lines = sitemap.read_text(encoding="utf-8").splitlines(keepends=True)

    # Find indices of '/macros/' lines
    macros_indices = [i for i, line in enumerate(lines) if "/macros/" in line]

    # Create a set of indices to remove (the matching line plus its surrounding <url> block lines)
    indices_to_remove = set()
    for i in macros_indices:
        indices_to_remove.update(range(i - 1, i + 3))  # i-1, i, i+1, i+2

    # Create new list of lines, excluding the ones to remove
    new_lines = [line for i, line in enumerate(lines) if i not in indices_to_remove]

    # Write the cleaned content back to the file
    sitemap.write_text("".join(new_lines), encoding="utf-8")

    print(f"Removed {len(macros_indices)} URLs containing '/macros/' from {sitemap}")


def main():
    """Builds docs, updates titles and edit links, and prints local server command."""
    prepare_docs_markdown()

    # Build the main documentation
    print(f"Building docs from {DOCS}")
    subprocess.run(f"mkdocs build -f {DOCS.parent}/mkdocs.yml --strict", check=True, shell=True)
    remove_macros()
    print(f"Site built at {SITE}")

    # Update docs HTML pages
    update_docs_html()

    # Show command to serve built website
    print('Docs built correctly ✅\nServe site at http://localhost:8000 with "python -m http.server --directory site"')


if __name__ == "__main__":
    main()
docs/build_reference.py
ADDED
@@ -0,0 +1,147 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
"""
Helper file to build Ultralytics Docs reference section. Recursively walks through the ultralytics dir and builds an
MkDocs reference section of *.md files composed of classes and functions, and also creates a nav menu for use in
mkdocs.yaml.

Note: Must be run from repository root directory. Do not run from docs directory.
"""

import re
import subprocess
from collections import defaultdict
from pathlib import Path

# Constants
hub_sdk = False
if hub_sdk:
    PACKAGE_DIR = Path("/Users/glennjocher/PycharmProjects/hub-sdk/hub_sdk")
    REFERENCE_DIR = PACKAGE_DIR.parent / "docs/reference"
    GITHUB_REPO = "ultralytics/hub-sdk"
else:
    FILE = Path(__file__).resolve()
    PACKAGE_DIR = FILE.parents[1] / "ultralytics"  # i.e. /Users/glennjocher/PycharmProjects/ultralytics/ultralytics
    REFERENCE_DIR = PACKAGE_DIR.parent / "docs/en/reference"
    GITHUB_REPO = "ultralytics/ultralytics"


def extract_classes_and_functions(filepath: Path) -> tuple:
    """Extracts class and function names from a given Python file."""
    content = filepath.read_text()
    class_pattern = r"(?:^|\n)class\s(\w+)(?:\(|:)"
    func_pattern = r"(?:^|\n)def\s(\w+)\("

    classes = re.findall(class_pattern, content)
    functions = re.findall(func_pattern, content)

    return classes, functions


def create_markdown(py_filepath: Path, module_path: str, classes: list, functions: list):
    """Creates a Markdown file containing the API reference for the given Python module."""
    md_filepath = py_filepath.with_suffix(".md")
    exists = md_filepath.exists()

    # Read existing content and keep header content between first two ---
    header_content = ""
    if exists:
        existing_content = md_filepath.read_text()
        header_parts = existing_content.split("---")
        for part in header_parts:
            if "description:" in part or "comments:" in part:
                header_content += f"---{part}---\n\n"
    if not header_content:
        header_content = "---\ndescription: TODO ADD DESCRIPTION\nkeywords: TODO ADD KEYWORDS\n---\n\n"

    module_name = module_path.replace(".__init__", "")
    module_path = module_path.replace(".", "/")
    url = f"https://github.com/{GITHUB_REPO}/blob/main/{module_path}.py"
    edit = f"https://github.com/{GITHUB_REPO}/edit/main/{module_path}.py"
    pretty = url.replace("__init__.py", "\\_\\_init\\_\\_.py")  # properly display __init__.py filenames
    title_content = (
        f"# Reference for `{module_path}.py`\n\n"
        f"!!! note\n\n"
        f"    This file is available at [{pretty}]({url}). If you spot a problem please help fix it by [contributing]"
        f"(https://docs.ultralytics.com/help/contributing/) a [Pull Request]({edit}) 🛠️. Thank you 🙏!\n\n"
    )
    md_content = ["<br>\n"] + [f"## ::: {module_name}.{class_name}\n\n<br><br><hr><br>\n" for class_name in classes]
    md_content.extend(f"## ::: {module_name}.{func_name}\n\n<br><br><hr><br>\n" for func_name in functions)
    md_content[-1] = md_content[-1].replace("<hr><br>", "")  # remove last horizontal line
    md_content = header_content + title_content + "\n".join(md_content)
    if not md_content.endswith("\n"):
        md_content += "\n"

    md_filepath.parent.mkdir(parents=True, exist_ok=True)
    md_filepath.write_text(md_content)

    if not exists:
        # Add new markdown file to the git staging area
        print(f"Created new file '{md_filepath}'")
        subprocess.run(["git", "add", "-f", str(md_filepath)], check=True, cwd=PACKAGE_DIR)

    return md_filepath.relative_to(PACKAGE_DIR.parent)


def nested_dict() -> defaultdict:
    """Creates and returns a nested defaultdict."""
    return defaultdict(nested_dict)


def sort_nested_dict(d: dict) -> dict:
    """Sorts a nested dictionary recursively."""
    return {key: sort_nested_dict(value) if isinstance(value, dict) else value for key, value in sorted(d.items())}


def create_nav_menu_yaml(nav_items: list, save: bool = False):
    """Creates a YAML file for the navigation menu based on the provided list of items."""
    nav_tree = nested_dict()

    for item_str in nav_items:
        item = Path(item_str)
        parts = item.parts
        current_level = nav_tree["reference"]
        for part in parts[2:-1]:  # skip the first two parts (docs and reference) and the last part (filename)
            current_level = current_level[part]

        md_file_name = parts[-1].replace(".md", "")
        current_level[md_file_name] = item

    nav_tree_sorted = sort_nested_dict(nav_tree)

    def _dict_to_yaml(d, level=0):
        """Converts a nested dictionary to a YAML-formatted string with indentation."""
        yaml_str = ""
        indent = " " * level
        for k, v in d.items():
            if isinstance(v, dict):
                yaml_str += f"{indent}- {k}:\n{_dict_to_yaml(v, level + 1)}"
            else:
                yaml_str += f"{indent}- {k}: {str(v).replace('docs/en/', '')}\n"
        return yaml_str

    # Print updated YAML reference section
    print("Scan complete, new mkdocs.yaml reference section is:\n\n", _dict_to_yaml(nav_tree_sorted))

    # Save new YAML reference section
    if save:
        (PACKAGE_DIR.parent / "nav_menu_updated.yml").write_text(_dict_to_yaml(nav_tree_sorted))


def main():
    """Main function to extract class and function names, create Markdown files, and generate a YAML navigation menu."""
    nav_items = []

    for py_filepath in PACKAGE_DIR.rglob("*.py"):
        classes, functions = extract_classes_and_functions(py_filepath)

        if classes or functions:
            py_filepath_rel = py_filepath.relative_to(PACKAGE_DIR)
            md_filepath = REFERENCE_DIR / py_filepath_rel
            module_path = f"{PACKAGE_DIR.name}.{py_filepath_rel.with_suffix('').as_posix().replace('/', '.')}"
            md_rel_filepath = create_markdown(md_filepath, module_path, classes, functions)
            nav_items.append(str(md_rel_filepath))

    create_nav_menu_yaml(nav_items)


if __name__ == "__main__":
    main()
docs/coming_soon_template.md
ADDED
@@ -0,0 +1,34 @@
---
description: Discover what's next for Ultralytics with our under-construction page, previewing new, groundbreaking AI and ML features coming soon.
keywords: Ultralytics, coming soon, under construction, new features, AI updates, ML advancements, YOLO, technology preview
---

# Under Construction 🏗️🌟

Welcome to the [Ultralytics](https://www.ultralytics.com/) "Under Construction" page! Here, we're hard at work developing the next generation of AI and ML innovations. This page serves as a teaser for the exciting updates and new features we're eager to share with you!

## Exciting New Features on the Way 🎉

- **Innovative Breakthroughs:** Get ready for advanced features and services that will transform your AI and ML experience.
- **New Horizons:** Anticipate novel products that redefine AI and ML capabilities.
- **Enhanced Services:** We're upgrading our services for greater efficiency and user-friendliness.

## Stay Updated 🚧

This placeholder page is your first stop for upcoming developments. Keep an eye out for:

- **Newsletter:** Subscribe [here](https://www.ultralytics.com/#newsletter) for the latest news.
- **Social Media:** Follow us [here](https://www.linkedin.com/company/ultralytics) for updates and teasers.
- **Blog:** Visit our [blog](https://www.ultralytics.com/blog) for detailed insights.

## We Value Your Input 🗣️

Your feedback shapes our future releases. Share your thoughts and suggestions [here](https://www.ultralytics.com/survey).

## Thank You, Community! 🌍

Your [contributions](https://docs.ultralytics.com/help/contributing/) inspire our continuous [innovation](https://github.com/ultralytics/ultralytics). Stay tuned for the big reveal of what's next in AI and ML at Ultralytics!

---

Excited for what's coming? Bookmark this page and get ready for a transformative AI and ML journey with Ultralytics! 🛠️🤖
docs/en/CNAME
ADDED
@@ -0,0 +1 @@
docs.ultralytics.com
docs/en/datasets/classify/caltech101.md
ADDED
@@ -0,0 +1,152 @@
---
comments: true
description: Explore the widely-used Caltech-101 dataset with 9,000 images across 101 categories. Ideal for object recognition tasks in machine learning and computer vision.
keywords: Caltech-101, dataset, object recognition, machine learning, computer vision, YOLO, deep learning, research, AI
---

# Caltech-101 Dataset

The [Caltech-101](https://data.caltech.edu/records/mzrjq-6wc02) dataset is a widely used dataset for object recognition tasks, containing around 9,000 images from 101 object categories. The categories were chosen to reflect a variety of real-world objects, and the images themselves were carefully selected and annotated to provide a challenging benchmark for object recognition algorithms.

## Key Features

- The Caltech-101 dataset comprises around 9,000 color images divided into 101 categories.
- The categories encompass a wide variety of objects, including animals, vehicles, household items, and people.
- The number of images per category varies, with about 40 to 800 images in each category.
- Images are of variable sizes, with most images being medium resolution.
- Caltech-101 is widely used for training and testing in the field of machine learning, particularly for object recognition tasks.

## Dataset Structure

Unlike many other datasets, the Caltech-101 dataset is not formally split into training and testing sets. Users typically create their own splits based on their specific needs. However, a common practice is to use a random subset of images for training (e.g., 30 images per category) and the remaining images for testing, as sketched below.
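The snippet below is an illustrative sketch only, not an official Ultralytics utility: it assumes the archive has been extracted into one folder per category, and the `caltech101/101_ObjectCategories` path and `*.jpg` pattern are assumptions about your local copy.

```python
import random
import shutil
from pathlib import Path

random.seed(0)  # reproducible split
root = Path("caltech101/101_ObjectCategories")  # assumed extract location
out = Path("caltech101_split")

for category in sorted(p for p in root.iterdir() if p.is_dir()):
    images = sorted(category.glob("*.jpg"))
    random.shuffle(images)
    train, test = images[:30], images[30:]  # e.g., 30 training images per category
    for split, files in (("train", train), ("test", test)):
        dst = out / split / category.name
        dst.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dst / f.name)  # copy, leaving the original files intact
```

This produces the common `split/category/image.jpg` folder layout used by most classification tooling.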

## Applications

The Caltech-101 dataset is extensively used for training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in object recognition tasks, such as [Convolutional Neural Networks](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNNs), Support Vector Machines (SVMs), and various other machine learning algorithms. Its wide variety of categories and high-quality images make it an excellent dataset for research and development in the field of machine learning and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv).

## Usage

To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="caltech101", epochs=100, imgsz=416)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=caltech101 model=yolo11n-cls.pt epochs=100 imgsz=416
        ```

## Sample Images and Annotations

The Caltech-101 dataset contains high-quality color images of various objects, providing a well-structured dataset for object recognition tasks. Here are some examples of images from the dataset:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/caltech101-sample-image.avif)

The example showcases the variety and complexity of the objects in the Caltech-101 dataset, emphasizing the significance of a diverse dataset for training robust object recognition models.

## Citations and Acknowledgments

If you use the Caltech-101 dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @article{fei2007learning,
          title={Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories},
          author={Fei-Fei, Li and Fergus, Rob and Perona, Pietro},
          journal={Computer vision and Image understanding},
          volume={106},
          number={1},
          pages={59--70},
          year={2007},
          publisher={Elsevier}
        }
        ```

We would like to acknowledge Li Fei-Fei, Rob Fergus, and Pietro Perona for creating and maintaining the Caltech-101 dataset as a valuable resource for the machine learning and computer vision research community. For more information about the Caltech-101 dataset and its creators, visit the [Caltech-101 dataset website](https://data.caltech.edu/records/mzrjq-6wc02).

## FAQ

### What is the Caltech-101 dataset used for in machine learning?

The [Caltech-101](https://data.caltech.edu/records/mzrjq-6wc02) dataset is widely used in machine learning for object recognition tasks. It contains around 9,000 images across 101 categories, providing a challenging benchmark for evaluating object recognition algorithms. Researchers leverage it to train and test models, especially Convolutional [Neural Networks](https://www.ultralytics.com/glossary/neural-network-nn) (CNNs) and [Support Vector Machines](https://www.ultralytics.com/glossary/support-vector-machine-svm) (SVMs), in computer vision.

### How can I train an Ultralytics YOLO model on the Caltech-101 dataset?

To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the provided code snippets. For example, to train for 100 [epochs](https://www.ultralytics.com/glossary/epoch):

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="caltech101", epochs=100, imgsz=416)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=caltech101 model=yolo11n-cls.pt epochs=100 imgsz=416
        ```

For more detailed arguments and options, refer to the model [Training](../../modes/train.md) page.

### What are the key features of the Caltech-101 dataset?

The Caltech-101 dataset includes:

- Around 9,000 color images across 101 categories.
- Categories covering a diverse range of objects, including animals, vehicles, and household items.
- A variable number of images per category, typically between 40 and 800.
- Variable image sizes, with most being medium resolution.

These features make it an excellent choice for training and evaluating object recognition models in [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision.

### Why should I cite the Caltech-101 dataset in my research?

Citing the Caltech-101 dataset in your research acknowledges the creators' contributions and provides a reference for others who might use the dataset. The recommended citation is:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @article{fei2007learning,
          title={Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories},
          author={Fei-Fei, Li and Fergus, Rob and Perona, Pietro},
          journal={Computer vision and Image understanding},
          volume={106},
          number={1},
          pages={59--70},
          year={2007},
          publisher={Elsevier}
        }
        ```

Citing helps maintain the integrity of academic work and assists peers in locating the original resource.

### Can I use Ultralytics HUB for training models on the Caltech-101 dataset?

Yes, you can use Ultralytics HUB for training models on the Caltech-101 dataset. Ultralytics HUB provides an intuitive platform for managing datasets, training models, and deploying them without extensive coding. For a detailed guide, refer to the [how to train your custom models with Ultralytics HUB](https://www.ultralytics.com/blog/how-to-train-your-custom-models-with-ultralytics-hub) blog post.
docs/en/datasets/classify/caltech256.md
ADDED
@@ -0,0 +1,146 @@
---
comments: true
description: Explore the Caltech-256 dataset, featuring 30,000 images across 257 categories, ideal for training and testing object recognition algorithms.
keywords: Caltech-256 dataset, object classification, image dataset, machine learning, computer vision, deep learning, YOLO, training dataset
---

# Caltech-256 Dataset

The [Caltech-256](https://data.caltech.edu/records/nyy15-4j048) dataset is an extensive collection of images used for object classification tasks. It contains around 30,000 images divided into 257 categories (256 object categories and 1 background category). The images are carefully curated and annotated to provide a challenging and diverse benchmark for object recognition algorithms.

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/isc06_9qnM0"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> How to Train <a href="https://www.ultralytics.com/glossary/image-classification">Image Classification</a> Model using Caltech-256 Dataset with Ultralytics HUB
</p>

## Key Features

- The Caltech-256 dataset comprises around 30,000 color images divided into 257 categories.
- Each category contains a minimum of 80 images (see the sketch after this list).
- The categories encompass a wide variety of real-world objects, including animals, vehicles, household items, and people.
- Images are of variable sizes and resolutions.
- Caltech-256 is widely used for training and testing in the field of machine learning, particularly for object recognition tasks.
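As a quick, optional check on those counts (a sketch assuming a local copy extracted into one folder per category; the `caltech256/256_ObjectCategories` path is an assumption about your download):

```python
from pathlib import Path

root = Path("caltech256/256_ObjectCategories")  # assumed extract location

# Count the files in each category folder and report the smallest category
counts = {p.name: sum(1 for f in p.iterdir() if f.is_file()) for p in root.iterdir() if p.is_dir()}
smallest = min(counts, key=counts.get)
print(smallest, counts[smallest])  # expected: at least 80 images per category
```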

## Dataset Structure

Like Caltech-101, the Caltech-256 dataset does not have a formal split between training and testing sets. Users typically create their own splits according to their specific needs. A common practice is to use a random subset of images for training and the remaining images for testing.

## Applications

The Caltech-256 dataset is extensively used for training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in object recognition tasks, such as [Convolutional Neural Networks](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNNs), Support Vector Machines (SVMs), and various other machine learning algorithms. Its diverse set of categories and high-quality images make it an invaluable dataset for research and development in the field of machine learning and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv).

## Usage

To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="caltech256", epochs=100, imgsz=416)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=caltech256 model=yolo11n-cls.pt epochs=100 imgsz=416
        ```

## Sample Images and Annotations

The Caltech-256 dataset contains high-quality color images of various objects, providing a comprehensive dataset for object recognition tasks. Here are some examples of images from the dataset ([credit](https://ml4a.github.io/demos/tsne_viewer.html)):

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/caltech256-sample-image.avif)

The example showcases the diversity and complexity of the objects in the Caltech-256 dataset, emphasizing the importance of a varied dataset for training robust object recognition models.

## Citations and Acknowledgments

If you use the Caltech-256 dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @article{griffin2007caltech,
          title={Caltech-256 object category dataset},
          author={Griffin, Gregory and Holub, Alex and Perona, Pietro},
          year={2007}
        }
        ```

We would like to acknowledge Gregory Griffin, Alex Holub, and Pietro Perona for creating and maintaining the Caltech-256 dataset as a valuable resource for the [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision research community. For more information about the Caltech-256 dataset and its creators, visit the [Caltech-256 dataset website](https://data.caltech.edu/records/nyy15-4j048).

## FAQ

### What is the Caltech-256 dataset and why is it important for machine learning?

The [Caltech-256](https://data.caltech.edu/records/nyy15-4j048) dataset is a large image dataset used primarily for object classification tasks in machine learning and computer vision. It consists of around 30,000 color images divided into 257 categories, covering a wide range of real-world objects. The dataset's diverse and high-quality images make it an excellent benchmark for evaluating object recognition algorithms, which is crucial for developing robust machine learning models.

### How can I train a YOLO model on the Caltech-256 dataset using Python or CLI?

To train a YOLO model on the Caltech-256 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch), you can use the following code snippets. Refer to the model [Training](../../modes/train.md) page for additional options.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model

        # Train the model
        results = model.train(data="caltech256", epochs=100, imgsz=416)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=caltech256 model=yolo11n-cls.pt epochs=100 imgsz=416
        ```

### What are the most common use cases for the Caltech-256 dataset?

The Caltech-256 dataset is widely used for various object recognition tasks such as:

- Training Convolutional [Neural Networks](https://www.ultralytics.com/glossary/neural-network-nn) (CNNs)
- Evaluating the performance of [Support Vector Machines](https://www.ultralytics.com/glossary/support-vector-machine-svm) (SVMs)
- Benchmarking new deep learning algorithms
- Developing [object detection](https://www.ultralytics.com/glossary/object-detection) models using frameworks like Ultralytics YOLO

Its diversity and comprehensive annotations make it ideal for research and development in machine learning and computer vision.

### How is the Caltech-256 dataset structured and split for training and testing?

The Caltech-256 dataset does not come with a predefined split for training and testing. Users typically create their own splits according to their specific needs. A common approach is to randomly select a subset of images for training and use the remaining images for testing. This flexibility allows users to tailor the dataset to their specific project requirements and experimental setups.

### Why should I use Ultralytics YOLO for training models on the Caltech-256 dataset?

Ultralytics YOLO models offer several advantages for training on the Caltech-256 dataset:

- **High Accuracy**: YOLO models are known for their state-of-the-art performance in object detection tasks.
- **Speed**: They provide real-time inference capabilities, making them suitable for applications requiring quick predictions.
- **Ease of Use**: With Ultralytics HUB, users can train, validate, and deploy models without extensive coding.
- **Pretrained Models**: Starting from pretrained models, like `yolo11n-cls.pt`, can significantly reduce training time and improve model [accuracy](https://www.ultralytics.com/glossary/accuracy).

For more details, explore our [comprehensive training guide](../../modes/train.md).
docs/en/datasets/classify/cifar10.md
ADDED
@@ -0,0 +1,173 @@
---
comments: true
description: Explore the CIFAR-10 dataset, featuring 60,000 color images in 10 classes. Learn about its structure, applications, and how to train models using YOLO.
keywords: CIFAR-10, dataset, machine learning, computer vision, image classification, YOLO, deep learning, neural networks
---

# CIFAR-10 Dataset

The [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) (Canadian Institute For Advanced Research) dataset is a collection of images used widely for [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision algorithms. It was developed by researchers at the CIFAR institute and consists of 60,000 32x32 color images in 10 different classes.

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/fLBbyhPbWzY"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> How to Train an <a href="https://www.ultralytics.com/glossary/image-classification">Image Classification</a> Model with CIFAR-10 Dataset using Ultralytics YOLO11
</p>

## Key Features

- The CIFAR-10 dataset consists of 60,000 images, divided into 10 classes.
- Each class contains 6,000 images, split into 5,000 for training and 1,000 for testing.
- The images are colored and of size 32x32 pixels.
- The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.
- CIFAR-10 is commonly used for training and testing in the field of machine learning and computer vision.

## Dataset Structure

The CIFAR-10 dataset is split into two subsets:

1. **Training Set**: This subset contains 50,000 images used for training machine learning models.
2. **Testing Set**: This subset consists of 10,000 images used for testing and benchmarking the trained models (see the verification sketch after this list).
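As an optional sanity check of these splits (a sketch assuming `torchvision` is installed; it is not required for the Ultralytics workflow shown in the Usage section below):

```python
from torchvision.datasets import CIFAR10

# Download (if needed) and load both subsets
train_set = CIFAR10(root="data", train=True, download=True)
test_set = CIFAR10(root="data", train=False, download=True)

print(len(train_set), len(test_set))  # expected: 50000 10000
print(train_set.classes)  # the 10 class names
```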

## Applications

The CIFAR-10 dataset is widely used for training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in image classification tasks, such as [Convolutional Neural Networks](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNNs), Support Vector Machines (SVMs), and various other machine learning algorithms. The diversity of the dataset in terms of classes and the presence of color images make it a well-rounded dataset for research and development in the field of machine learning and computer vision.

## Usage

To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="cifar10", epochs=100, imgsz=32)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=cifar10 model=yolo11n-cls.pt epochs=100 imgsz=32
        ```

## Sample Images and Annotations

The CIFAR-10 dataset contains color images of various objects, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/cifar10-sample-image.avif)

The example showcases the variety and complexity of the objects in the CIFAR-10 dataset, highlighting the importance of a diverse dataset for training robust image classification models.

## Citations and Acknowledgments

If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @TECHREPORT{Krizhevsky09learningmultiple,
          author={Alex Krizhevsky},
          title={Learning multiple layers of features from tiny images},
          institution={},
          year={2009}
        }
        ```

We would like to acknowledge Alex Krizhevsky for creating and maintaining the CIFAR-10 dataset as a valuable resource for the machine learning and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) research community. For more information about the CIFAR-10 dataset and its creator, visit the [CIFAR-10 dataset website](https://www.cs.toronto.edu/~kriz/cifar.html).

## FAQ

### How can I train a YOLO model on the CIFAR-10 dataset?

To train a YOLO model on the CIFAR-10 dataset using Ultralytics, you can follow the examples provided for both Python and CLI. Here is a basic example to train your model for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 32x32 pixels:

!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="cifar10", epochs=100, imgsz=32)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=cifar10 model=yolo11n-cls.pt epochs=100 imgsz=32
        ```

For more details, refer to the model [Training](../../modes/train.md) page.

### What are the key features of the CIFAR-10 dataset?

The CIFAR-10 dataset consists of 60,000 color images divided into 10 classes. Each class contains 6,000 images, with 5,000 for training and 1,000 for testing. The images are 32x32 pixels in size and vary across the following categories:

- Airplanes
- Cars
- Birds
- Cats
- Deer
- Dogs
- Frogs
- Horses
- Ships
- Trucks

This diverse dataset is essential for training image classification models in fields such as machine learning and computer vision. For more information, visit the CIFAR-10 sections on [dataset structure](#dataset-structure) and [applications](#applications).

### Why use the CIFAR-10 dataset for image classification tasks?

The CIFAR-10 dataset is an excellent benchmark for image classification due to its diversity and structure. It contains a balanced mix of 60,000 labeled images across 10 different categories, which helps in training robust and generalized models. It is widely used for evaluating deep learning models, including Convolutional [Neural Networks](https://www.ultralytics.com/glossary/neural-network-nn) (CNNs) and other machine learning algorithms. The dataset is relatively small, making it suitable for quick experimentation and algorithm development. Explore its numerous applications in the [applications](#applications) section.

### How is the CIFAR-10 dataset structured?

The CIFAR-10 dataset is structured into two main subsets:

1. **Training Set**: Contains 50,000 images used for training machine learning models.
2. **Testing Set**: Consists of 10,000 images for testing and benchmarking the trained models.

Each subset comprises images categorized into 10 classes, with their annotations readily available for model training and evaluation. For more detailed information, refer to the [dataset structure](#dataset-structure) section.

### How can I cite the CIFAR-10 dataset in my research?

If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @TECHREPORT{Krizhevsky09learningmultiple,
          author={Alex Krizhevsky},
          title={Learning multiple layers of features from tiny images},
          institution={},
          year={2009}
        }
        ```

Acknowledging the dataset's creators helps support continued research and development in the field. For more details, see the [citations and acknowledgments](#citations-and-acknowledgments) section.

### What are some practical examples of using the CIFAR-10 dataset?

The CIFAR-10 dataset is often used for training image classification models, such as Convolutional Neural Networks (CNNs) and [Support Vector Machines](https://www.ultralytics.com/glossary/support-vector-machine-svm) (SVMs). These models can be employed in various computer vision tasks including [object detection](https://www.ultralytics.com/glossary/object-detection), [image recognition](https://www.ultralytics.com/glossary/image-recognition), and automated tagging. To see some practical examples, check the code snippets in the [usage](#usage) section.
docs/en/datasets/classify/cifar100.md
ADDED
@@ -0,0 +1,130 @@
---
comments: true
description: Explore the CIFAR-100 dataset, consisting of 60,000 32x32 color images across 100 classes. Ideal for machine learning and computer vision tasks.
keywords: CIFAR-100, dataset, machine learning, computer vision, image classification, deep learning, YOLO, training, testing, Alex Krizhevsky
---

# CIFAR-100 Dataset

The [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) (Canadian Institute For Advanced Research) dataset is a significant extension of the CIFAR-10 dataset, composed of 60,000 32x32 color images in 100 different classes. It was developed by researchers at the CIFAR institute, offering a more challenging benchmark for complex machine learning and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) tasks.

## Key Features

- The CIFAR-100 dataset consists of 60,000 images, divided into 100 classes.
- Each class contains 600 images, split into 500 for training and 100 for testing.
- The images are colored and of size 32x32 pixels.
- The 100 classes are grouped into 20 coarse categories for higher-level classification.
- CIFAR-100 is commonly used for training and testing in the field of machine learning and computer vision.

## Dataset Structure

The CIFAR-100 dataset is split into two subsets:

1. **Training Set**: This subset contains 50,000 images used for training machine learning models.
2. **Testing Set**: This subset consists of 10,000 images used for testing and benchmarking the trained models.

## Applications

The CIFAR-100 dataset is extensively used for training and evaluating deep learning models in image classification tasks, such as [Convolutional Neural Networks](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNNs), Support Vector Machines (SVMs), and various other machine learning algorithms. The dataset's diversity of classes and its color images make it a challenging and comprehensive benchmark for research and development in machine learning and computer vision.

## Usage

To train a YOLO model on the CIFAR-100 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="cifar100", epochs=100, imgsz=32)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=cifar100 model=yolo11n-cls.pt epochs=100 imgsz=32
        ```
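
After training, you can export the resulting classifier for deployment. The snippet below is a minimal sketch: the checkpoint path assumes the default Ultralytics save location (`runs/classify/train/weights/best.pt`), which may differ on your setup, and ONNX is just one of several supported export formats.

```python
from ultralytics import YOLO

# Load the best checkpoint from the training run above (adjust the path to your run)
model = YOLO("runs/classify/train/weights/best.pt")

# Export to ONNX; export() returns the path of the exported file
onnx_path = model.export(format="onnx", imgsz=32)
print(onnx_path)
```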

## Sample Images and Annotations

The CIFAR-100 dataset contains color images of various objects, providing a well-structured dataset for [image classification](https://www.ultralytics.com/glossary/image-classification) tasks. Here are some examples of images from the dataset:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/cifar100-sample-image.avif)

The example showcases the variety and complexity of the objects in the CIFAR-100 dataset, highlighting the importance of a diverse dataset for training robust image classification models.

## Citations and Acknowledgments

If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @TECHREPORT{Krizhevsky09learningmultiple,
                    author={Alex Krizhevsky},
                    title={Learning multiple layers of features from tiny images},
                    institution={},
                    year={2009}
        }
        ```

We would like to acknowledge Alex Krizhevsky for creating and maintaining the CIFAR-100 dataset as a valuable resource for the [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision research community. For more information about the CIFAR-100 dataset and its creator, visit the [CIFAR-100 dataset website](https://www.cs.toronto.edu/~kriz/cifar.html).

## FAQ

### What is the CIFAR-100 dataset and why is it significant?

The [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a large collection of 60,000 32x32 color images classified into 100 classes. Developed by the Canadian Institute For Advanced Research (CIFAR), it provides a challenging dataset ideal for complex machine learning and computer vision tasks. Its significance lies in the diversity of classes and the small size of the images, making it a valuable resource for training and testing deep learning models, like Convolutional [Neural Networks](https://www.ultralytics.com/glossary/neural-network-nn) (CNNs), using frameworks such as Ultralytics YOLO.

### How do I train a YOLO model on the CIFAR-100 dataset?

You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI commands. Here's how:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="cifar100", epochs=100, imgsz=32)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=cifar100 model=yolo11n-cls.pt epochs=100 imgsz=32
        ```

For a comprehensive list of available arguments, please refer to the model [Training](../../modes/train.md) page.

### What are the primary applications of the CIFAR-100 dataset?

The CIFAR-100 dataset is extensively used in training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models for image classification. Its diverse set of 100 classes, grouped into 20 coarse categories, provides a challenging environment for testing algorithms such as Convolutional Neural Networks (CNNs), [Support Vector Machines](https://www.ultralytics.com/glossary/support-vector-machine-svm) (SVMs), and various other machine learning approaches. This dataset is a key resource in research and development within the machine learning and computer vision fields.

### How is the CIFAR-100 dataset structured?

The CIFAR-100 dataset is split into two main subsets:

1. **Training Set**: Contains 50,000 images used for training machine learning models.
2. **Testing Set**: Consists of 10,000 images used for testing and benchmarking the trained models.

Each of the 100 classes contains 600 images, with 500 images for training and 100 for testing, making it uniquely suited for rigorous academic and industrial research.

### Where can I find sample images and annotations from the CIFAR-100 dataset?

The CIFAR-100 dataset includes a variety of color images of various objects, making it a structured dataset for image classification tasks. You can refer to the documentation page to see [sample images and annotations](#sample-images-and-annotations). These examples highlight the dataset's diversity and complexity, which are important for training robust image classification models.
docs/en/datasets/classify/fashion-mnist.md
ADDED
@@ -0,0 +1,139 @@
---
comments: true
description: Explore the Fashion-MNIST dataset, a modern replacement for MNIST with 70,000 Zalando article images. Ideal for benchmarking machine learning models.
keywords: Fashion-MNIST, image classification, Zalando dataset, machine learning, deep learning, CNN, dataset overview
---

# Fashion-MNIST Dataset

The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is a database of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) algorithms.

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/eX5ad6udQ9Q"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> How to do <a href="https://www.ultralytics.com/glossary/image-classification">Image Classification</a> on Fashion MNIST Dataset using Ultralytics YOLO11
</p>

## Key Features

- Fashion-MNIST contains 60,000 training images and 10,000 testing images of Zalando's article images.
- The dataset comprises grayscale images of size 28x28 pixels.
- Each pixel has a single value indicating its lightness or darkness, with higher numbers meaning darker; this value is an integer between 0 and 255.
- Fashion-MNIST is widely used for training and testing in the field of machine learning, especially for image classification tasks.

## Dataset Structure

The Fashion-MNIST dataset is split into two subsets:

1. **Training Set**: This subset contains 60,000 images used for training machine learning models.
2. **Testing Set**: This subset consists of 10,000 images used for testing and benchmarking the trained models.

## Labels

Each training and test example is assigned to one of the following labels (a code mapping follows the list):

0. T-shirt/top
1. Trouser
2. Pullover
3. Dress
4. Coat
5. Sandal
6. Shirt
7. Sneaker
8. Bag
9. Ankle boot
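
If you work with raw Fashion-MNIST label indices outside of Ultralytics, it can help to keep the index-to-name mapping in code. A minimal sketch (the `FASHION_MNIST_LABELS` name is an illustrative choice, not part of any library):

```python
# Index-to-name mapping for the 10 Fashion-MNIST classes listed above
FASHION_MNIST_LABELS = {
    0: "T-shirt/top",
    1: "Trouser",
    2: "Pullover",
    3: "Dress",
    4: "Coat",
    5: "Sandal",
    6: "Shirt",
    7: "Sneaker",
    8: "Bag",
    9: "Ankle boot",
}

print(FASHION_MNIST_LABELS[9])  # -> Ankle boot
```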

## Applications

The Fashion-MNIST dataset is widely used for training and evaluating deep learning models in image classification tasks, such as [Convolutional Neural Networks](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNNs), [Support Vector Machines](https://www.ultralytics.com/glossary/support-vector-machine-svm) (SVMs), and various other machine learning algorithms. The dataset's simple and well-structured format makes it an essential resource for researchers and practitioners in the field of machine learning and computer vision.

## Usage

To train a CNN model on the Fashion-MNIST dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 28x28, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="fashion-mnist", epochs=100, imgsz=28)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=fashion-mnist model=yolo11n-cls.pt epochs=100 imgsz=28
        ```

## Sample Images and Annotations

The Fashion-MNIST dataset contains grayscale images of Zalando's article images, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/fashion-mnist-sample-images.avif)

The example showcases the variety and complexity of the images in the Fashion-MNIST dataset, highlighting the importance of a diverse dataset for training robust image classification models.

## Acknowledgments

If you use the Fashion-MNIST dataset in your research or development work, please acknowledge the dataset by linking to the [GitHub repository](https://github.com/zalandoresearch/fashion-mnist). This dataset was made available by Zalando Research.

## FAQ

### What is the Fashion-MNIST dataset and how is it different from MNIST?

The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is a collection of 70,000 grayscale images of Zalando's article images, intended as a modern replacement for the original MNIST dataset. It serves as a benchmark for machine learning models in the context of image classification tasks. Unlike MNIST, which contains handwritten digits, Fashion-MNIST consists of 28x28-pixel images categorized into 10 fashion-related classes, such as T-shirt/top, trouser, and ankle boot.

### How can I train a YOLO model on the Fashion-MNIST dataset?

To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use both Python and CLI commands. Here's a quick example to get you started:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained model
        model = YOLO("yolo11n-cls.pt")

        # Train the model on Fashion-MNIST
        results = model.train(data="fashion-mnist", epochs=100, imgsz=28)
        ```

    === "CLI"

        ```bash
        yolo classify train data=fashion-mnist model=yolo11n-cls.pt epochs=100 imgsz=28
        ```

For more detailed training parameters, refer to the [Training page](../../modes/train.md).

### Why should I use the Fashion-MNIST dataset for benchmarking my machine learning models?

The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is widely recognized in the [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) community as a robust alternative to MNIST. It offers a more complex and varied set of images, making it an excellent choice for benchmarking image classification models. The dataset's structure, comprising 60,000 training images and 10,000 testing images, each labeled with one of 10 classes, makes it ideal for evaluating the performance of different machine learning algorithms in a more challenging context.

### Can I use Ultralytics YOLO for image classification tasks like Fashion-MNIST?

Yes, Ultralytics YOLO models can be used for image classification tasks, including those involving the Fashion-MNIST dataset. YOLO11, for example, supports various vision tasks such as detection, segmentation, and classification. To get started with image classification tasks, refer to the [Classification page](https://docs.ultralytics.com/tasks/classify/).

### What are the key features and structure of the Fashion-MNIST dataset?

The Fashion-MNIST dataset is divided into two main subsets: 60,000 training images and 10,000 testing images. Each image is a 28x28-pixel grayscale picture representing one of 10 fashion-related classes. The simplicity and well-structured format make it ideal for training and evaluating models in machine learning and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) tasks. For more details on the dataset structure, see the [Dataset Structure section](#dataset-structure).

### How can I acknowledge the use of the Fashion-MNIST dataset in my research?

If you utilize the Fashion-MNIST dataset in your research or development projects, it's important to acknowledge it by linking to the [GitHub repository](https://github.com/zalandoresearch/fashion-mnist). This helps attribute the data to Zalando Research, who made the dataset available for public use.
docs/en/datasets/classify/imagenet.md
ADDED
@@ -0,0 +1,132 @@
---
comments: true
description: Explore the extensive ImageNet dataset and discover its role in advancing deep learning in computer vision. Access pretrained models and training examples.
keywords: ImageNet, deep learning, visual recognition, computer vision, pretrained models, YOLO, dataset, object detection, image classification
---

# ImageNet Dataset

[ImageNet](https://www.image-net.org/) is a large-scale database of annotated images designed for use in visual object recognition research. It contains over 14 million images, with each image annotated using WordNet synsets, making it one of the most extensive resources available for training [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) tasks.

## ImageNet Pretrained Models

{% include "macros/yolo-cls-perf.md" %}

## Key Features

- ImageNet contains over 14 million high-resolution images spanning thousands of object categories.
- The dataset is organized according to the WordNet hierarchy, with each synset representing a category.
- ImageNet is widely used for training and benchmarking in the field of computer vision, particularly for [image classification](https://www.ultralytics.com/glossary/image-classification) and [object detection](https://www.ultralytics.com/glossary/object-detection) tasks.
- The annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been instrumental in advancing computer vision research.

## Dataset Structure

The ImageNet dataset is organized using the WordNet hierarchy. Each node in the hierarchy represents a category, and each category is described by a synset (a collection of synonymous terms). The images in ImageNet are annotated with one or more synsets, providing a rich resource for training models to recognize various objects and their relationships.

## ImageNet Large Scale Visual Recognition Challenge (ILSVRC)

The annual [ImageNet Large Scale Visual Recognition Challenge (ILSVRC)](https://image-net.org/challenges/LSVRC/) has been an important event in the field of computer vision. It has provided a platform for researchers and developers to evaluate their algorithms and models on a large-scale dataset with standardized evaluation metrics. The ILSVRC has led to significant advancements in the development of deep learning models for image classification, object detection, and other computer vision tasks.

## Applications

The ImageNet dataset is widely used for training and evaluating deep learning models in various computer vision tasks, such as image classification, object detection, and object localization. Some popular deep learning architectures, such as AlexNet, VGG, and ResNet, were developed and benchmarked using the ImageNet dataset.

## Usage

To train a deep learning model on the ImageNet dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="imagenet", epochs=100, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=imagenet model=yolo11n-cls.pt epochs=100 imgsz=224
        ```
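
Once you have a trained or pretrained classifier, you can also measure its top-1 and top-5 accuracy on the ImageNet validation split. A minimal sketch, assuming the dataset is already available locally (ImageNet itself must be downloaded manually and is subject to its own terms of access):

```python
from ultralytics import YOLO

# Load an ImageNet-pretrained classification model
model = YOLO("yolo11n-cls.pt")

# Validate on the ImageNet validation split
metrics = model.val(data="imagenet", imgsz=224)

# Top-1 and top-5 classification accuracy
print(metrics.top1, metrics.top5)
```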

## Sample Images and Annotations

The ImageNet dataset contains high-resolution images spanning thousands of object categories, providing a diverse and extensive dataset for training and evaluating computer vision models. Here are some examples of images from the dataset:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/imagenet-sample-images.avif)

The example showcases the variety and complexity of the images in the ImageNet dataset, highlighting the importance of a diverse dataset for training robust computer vision models.

## Citations and Acknowledgments

If you use the ImageNet dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @article{ILSVRC15,
                 author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
                 title={ImageNet Large Scale Visual Recognition Challenge},
                 year={2015},
                 journal={International Journal of Computer Vision (IJCV)},
                 volume={115},
                 number={3},
                 pages={211-252}
        }
        ```

We would like to acknowledge the ImageNet team, led by Olga Russakovsky, Jia Deng, and Li Fei-Fei, for creating and maintaining the ImageNet dataset as a valuable resource for the [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision research community. For more information about the ImageNet dataset and its creators, visit the [ImageNet website](https://www.image-net.org/).

## FAQ

### What is the ImageNet dataset and how is it used in computer vision?

The [ImageNet dataset](https://www.image-net.org/) is a large-scale database consisting of over 14 million high-resolution images categorized using WordNet synsets. It is extensively used in visual object recognition research, including image classification and object detection. The dataset's annotations and sheer volume provide a rich resource for training deep learning models. Notably, models like AlexNet, VGG, and ResNet have been trained and benchmarked using ImageNet, showcasing its role in advancing computer vision.

### How can I use a pretrained YOLO model for image classification on the ImageNet dataset?

To use a pretrained Ultralytics YOLO model for image classification on the ImageNet dataset, follow these steps:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="imagenet", epochs=100, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=imagenet model=yolo11n-cls.pt epochs=100 imgsz=224
        ```

For more in-depth training instructions, refer to our [Training page](../../modes/train.md).

### Why should I use the Ultralytics YOLO11 pretrained models for my ImageNet dataset projects?

Ultralytics YOLO11 pretrained models offer state-of-the-art performance in terms of speed and [accuracy](https://www.ultralytics.com/glossary/accuracy) for various computer vision tasks. For example, the YOLO11n-cls model, with a top-1 accuracy of 69.0% and a top-5 accuracy of 88.3%, is optimized for real-time applications. Pretrained models reduce the computational resources required for training from scratch and accelerate development cycles. Learn more about the performance metrics of YOLO11 models in the [ImageNet Pretrained Models section](#imagenet-pretrained-models).

### How is the ImageNet dataset structured, and why is it important?

The ImageNet dataset is organized using the WordNet hierarchy, where each node in the hierarchy represents a category described by a synset (a collection of synonymous terms). This structure allows for detailed annotations, making it ideal for training models to recognize a wide variety of objects. The diversity and annotation richness of ImageNet make it a valuable dataset for developing robust and generalizable deep learning models. More about this organization can be found in the [Dataset Structure](#dataset-structure) section.

### What role does the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) play in computer vision?

The annual [ImageNet Large Scale Visual Recognition Challenge (ILSVRC)](https://image-net.org/challenges/LSVRC/) has been pivotal in driving advancements in computer vision by providing a competitive platform for evaluating algorithms on a large-scale, standardized dataset. It offers standardized evaluation metrics, fostering innovation and development in areas such as image classification, object detection, and [image segmentation](https://www.ultralytics.com/glossary/image-segmentation). The challenge has continuously pushed the boundaries of what is possible with deep learning and computer vision technologies.
docs/en/datasets/classify/imagenet10.md
ADDED
@@ -0,0 +1,127 @@
---
comments: true
description: Discover ImageNet10, a compact version of ImageNet for rapid model testing and CI checks. Perfect for quick evaluations in computer vision tasks.
keywords: ImageNet10, ImageNet, Ultralytics, CI tests, sanity checks, training pipelines, computer vision, deep learning, dataset
---

# ImageNet10 Dataset

The [ImageNet10](https://github.com/ultralytics/assets/releases/download/v0.0.0/imagenet10.zip) dataset is a small-scale subset of the [ImageNet](https://www.image-net.org/) database, developed by [Ultralytics](https://www.ultralytics.com/) and designed for CI tests, sanity checks, and fast testing of training pipelines. The dataset is composed of the first image from the training set and the first image from the validation set for each of the first 10 classes in ImageNet. Although significantly smaller, it retains the structure and diversity of the original ImageNet dataset.

## Key Features

- ImageNet10 is a compact version of ImageNet, with 20 images representing the first 10 classes of the original dataset.
- The dataset is organized according to the WordNet hierarchy, mirroring the structure of the full ImageNet dataset.
- It is ideally suited for CI tests, sanity checks, and rapid testing of training pipelines in [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) tasks.
- Although not designed for model benchmarking, it can provide a quick indication of a model's basic functionality and correctness.

## Dataset Structure

The ImageNet10 dataset, like the original ImageNet, is organized using the WordNet hierarchy. Each of the 10 classes in ImageNet10 is described by a synset (a collection of synonymous terms). The images in ImageNet10 are annotated with one or more synsets, providing a compact resource for testing models to recognize various objects and their relationships.

## Applications

The ImageNet10 dataset is useful for quickly testing and debugging computer vision models and pipelines. Its small size allows for rapid iteration, making it ideal for continuous integration tests and sanity checks. It can also be used for fast preliminary testing of new models or changes to existing models before moving on to full-scale testing with the complete ImageNet dataset.
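
Because the entire dataset contains only 20 images, a one-epoch training run completes in seconds, which makes it straightforward to wire into an automated test suite. The sketch below shows one way to do this with pytest; the test name and epoch count are illustrative choices, not part of Ultralytics:

```python
from ultralytics import YOLO


def test_classification_pipeline_smoke():
    """Smoke-test the classification training pipeline on ImageNet10."""
    model = YOLO("yolo11n-cls.pt")
    # A single short epoch is enough to exercise the full train/val loop
    results = model.train(data="imagenet10", epochs=1, imgsz=224)
    # The run should complete without raising and return metrics on the main process
    assert results is not None
```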

## Usage

To test a deep learning model on the ImageNet10 dataset with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Test Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="imagenet10", epochs=5, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=imagenet10 model=yolo11n-cls.pt epochs=5 imgsz=224
        ```

## Sample Images and Annotations

The ImageNet10 dataset contains a subset of images from the original ImageNet dataset. These images are chosen to represent the first 10 classes in the dataset, providing a diverse yet compact dataset for quick testing and evaluation.

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/imagenet10-sample-images.avif)

The example showcases the variety and complexity of the images in the ImageNet10 dataset, highlighting its usefulness for sanity checks and quick testing of computer vision models.

## Citations and Acknowledgments

If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @article{ILSVRC15,
                 author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
                 title={ImageNet Large Scale Visual Recognition Challenge},
                 year={2015},
                 journal={International Journal of Computer Vision (IJCV)},
                 volume={115},
                 number={3},
                 pages={211-252}
        }
        ```

We would like to acknowledge the ImageNet team, led by Olga Russakovsky, Jia Deng, and Li Fei-Fei, for creating and maintaining the ImageNet dataset. The ImageNet10 dataset, while a compact subset, is a valuable resource for quick testing and debugging in the [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision research community. For more information about the ImageNet dataset and its creators, visit the [ImageNet website](https://www.image-net.org/).

## FAQ

### What is the ImageNet10 dataset and how is it different from the full ImageNet dataset?

The [ImageNet10](https://github.com/ultralytics/assets/releases/download/v0.0.0/imagenet10.zip) dataset is a compact subset of the original [ImageNet](https://www.image-net.org/) database, created by Ultralytics for rapid CI tests, sanity checks, and training pipeline evaluations. ImageNet10 comprises only 20 images, representing the first training and validation image for each of the first 10 classes in ImageNet. Despite its small size, it maintains the structure and diversity of the full dataset, making it ideal for quick testing but not for benchmarking models.

### How can I use the ImageNet10 dataset to test my deep learning model?

To test your deep learning model on the ImageNet10 dataset with an image size of 224x224, use the following code snippets.

!!! example "Test Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="imagenet10", epochs=5, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=imagenet10 model=yolo11n-cls.pt epochs=5 imgsz=224
        ```

Refer to the [Training](../../modes/train.md) page for a comprehensive list of available arguments.

### Why should I use the ImageNet10 dataset for CI tests and sanity checks?

The ImageNet10 dataset is designed specifically for CI tests, sanity checks, and quick evaluations in [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) pipelines. Its small size allows for rapid iteration and testing, making it perfect for continuous integration processes where speed is crucial. By maintaining the structural complexity and diversity of the original ImageNet dataset, ImageNet10 provides a reliable indication of a model's basic functionality and correctness without the overhead of processing a large dataset.

### What are the main features of the ImageNet10 dataset?

The ImageNet10 dataset has several key features:

- **Compact Size**: With only 20 images, it allows for rapid testing and debugging.
- **Structured Organization**: Follows the WordNet hierarchy, similar to the full ImageNet dataset.
- **CI and Sanity Checks**: Ideally suited for continuous integration tests and sanity checks.
- **Not for Benchmarking**: While useful for quick model evaluations, it is not designed for extensive benchmarking.

### Where can I download the ImageNet10 dataset?

You can download the ImageNet10 dataset from the [Ultralytics GitHub releases page](https://github.com/ultralytics/assets/releases/download/v0.0.0/imagenet10.zip). For more detailed information about its structure and applications, refer to the [ImageNet10 Dataset](imagenet10.md) page.
docs/en/datasets/classify/imagenette.md
ADDED
@@ -0,0 +1,193 @@
---
comments: true
description: Explore the ImageNette dataset, a subset of ImageNet with 10 classes for efficient training and evaluation of image classification models. Ideal for ML and CV projects.
keywords: ImageNette dataset, ImageNet subset, image classification, machine learning, deep learning, YOLO, Convolutional Neural Networks, ML dataset, education, training
---

# ImageNette Dataset

The [ImageNette](https://github.com/fastai/imagenette) dataset is a subset of the larger [ImageNet](https://www.image-net.org/) dataset, but it only includes 10 easily distinguishable classes. It was created to provide a quicker, easier-to-use version of ImageNet for software development and education.

## Key Features

- ImageNette contains images from 10 different classes: tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, and parachute.
- The dataset comprises colored images of varying dimensions.
- ImageNette is widely used for training and testing in the field of machine learning, especially for image classification tasks.

## Dataset Structure

The ImageNette dataset is split into two subsets:

1. **Training Set**: This subset contains several thousand images used for training machine learning models. The exact number varies per class.
2. **Validation Set**: This subset consists of several hundred images used for validating and benchmarking the trained models. Again, the exact number varies per class.

## Applications

The ImageNette dataset is widely used for training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in image classification tasks, such as [Convolutional Neural Networks](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNNs), and various other machine learning algorithms. The dataset's straightforward format and well-chosen classes make it a handy resource for both beginner and experienced practitioners in the field of machine learning and computer vision.

## Usage

To train a model on the ImageNette dataset for 100 epochs with a standard image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="imagenette", epochs=100, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=imagenette model=yolo11n-cls.pt epochs=100 imgsz=224
        ```

## Sample Images and Annotations

The ImageNette dataset contains colored images of various objects and scenes, providing a diverse dataset for [image classification](https://www.ultralytics.com/glossary/image-classification) tasks. Here are some examples of images from the dataset:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/imagenette-sample-image.avif)

The example showcases the variety and complexity of the images in the ImageNette dataset, highlighting the importance of a diverse dataset for training robust image classification models.

## ImageNette160 and ImageNette320

For faster prototyping and training, the ImageNette dataset is also available in two reduced sizes: ImageNette160 and ImageNette320. These datasets maintain the same classes and structure as the full ImageNette dataset, but the images are resized to a smaller dimension. As such, these versions of the dataset are particularly useful for preliminary model testing, or when computational resources are limited.

To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imagenette320' in the training command. The following code snippets illustrate this:

!!! example "Train Example with ImageNette160"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model with ImageNette160
        results = model.train(data="imagenette160", epochs=100, imgsz=160)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model with ImageNette160
        yolo classify train data=imagenette160 model=yolo11n-cls.pt epochs=100 imgsz=160
        ```

!!! example "Train Example with ImageNette320"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model with ImageNette320
        results = model.train(data="imagenette320", epochs=100, imgsz=320)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model with ImageNette320
        yolo classify train data=imagenette320 model=yolo11n-cls.pt epochs=100 imgsz=320
        ```

These smaller versions of the dataset allow for rapid iterations during the development process while still providing valuable and realistic image classification tasks.
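
If you want to see the speed difference for yourself, one simple approach is to time a short run on each variant. A rough sketch (the dataset names come from the examples above; the timing logic is purely illustrative):

```python
import time

from ultralytics import YOLO

# Time one epoch on each ImageNette variant to compare iteration speed
for name, size in [("imagenette160", 160), ("imagenette320", 320), ("imagenette", 224)]:
    model = YOLO("yolo11n-cls.pt")
    start = time.time()
    model.train(data=name, epochs=1, imgsz=size)
    print(f"{name}: {time.time() - start:.1f} s per epoch")
```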

## Citations and Acknowledgments

If you use the ImageNette dataset in your research or development work, please acknowledge it appropriately. For more information about the ImageNette dataset, visit the [ImageNette dataset GitHub page](https://github.com/fastai/imagenette).

## FAQ

### What is the ImageNette dataset?

The [ImageNette dataset](https://github.com/fastai/imagenette) is a simplified subset of the larger [ImageNet dataset](https://www.image-net.org/), featuring only 10 easily distinguishable classes such as tench, English springer, and French horn. It was created to offer a more manageable dataset for efficient training and evaluation of image classification models. This dataset is particularly useful for quick software development and educational purposes in [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision.

### How can I use the ImageNette dataset for training a YOLO model?

To train a YOLO model on the ImageNette dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch), you can use the following commands. Make sure to have the Ultralytics YOLO environment set up.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="imagenette", epochs=100, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=imagenette model=yolo11n-cls.pt epochs=100 imgsz=224
        ```

For more details, see the [Training](../../modes/train.md) documentation page.

### Why should I use ImageNette for image classification tasks?

The ImageNette dataset is advantageous for several reasons:

- **Quick and Simple**: It contains only 10 classes, making it less complex and time-consuming compared to larger datasets.
- **Educational Use**: Ideal for learning and teaching the basics of image classification since it requires less computational power and time.
- **Versatility**: Widely used to train and benchmark various machine learning models, especially in image classification.

For more details on model training and dataset management, explore the [Dataset Structure](#dataset-structure) section.

### Can the ImageNette dataset be used with different image sizes?

Yes, the ImageNette dataset is also available in two resized versions: ImageNette160 and ImageNette320. These versions help in faster prototyping and are especially useful when computational resources are limited.

!!! example "Train Example with ImageNette160"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")

        # Train the model with ImageNette160
        results = model.train(data="imagenette160", epochs=100, imgsz=160)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model with ImageNette160
        yolo classify train data=imagenette160 model=yolo11n-cls.pt epochs=100 imgsz=160
        ```

For more information, refer to [Training with ImageNette160 and ImageNette320](#imagenette160-and-imagenette320).

### What are some practical applications of the ImageNette dataset?

The ImageNette dataset is extensively used in:

- **Educational Settings**: To educate beginners in machine learning and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv).
- **Software Development**: For rapid prototyping and development of image classification models.
- **Deep Learning Research**: To evaluate and benchmark the performance of various deep learning models, especially Convolutional [Neural Networks](https://www.ultralytics.com/glossary/neural-network-nn) (CNNs).

Explore the [Applications](#applications) section for detailed use cases.
docs/en/datasets/classify/imagewoof.md
ADDED
@@ -0,0 +1,148 @@
---
comments: true
description: Explore the ImageWoof dataset, a challenging subset of ImageNet focusing on 10 dog breeds, designed to enhance image classification models. Learn more on Ultralytics Docs.
keywords: ImageWoof dataset, ImageNet subset, dog breeds, image classification, deep learning, machine learning, Ultralytics, training dataset, noisy labels
---

# ImageWoof Dataset

The [ImageWoof](https://github.com/fastai/imagenette) dataset is a subset of ImageNet consisting of 10 classes that are challenging to classify, since they are all dog breeds. It was created as a more difficult task for [image classification](https://www.ultralytics.com/glossary/image-classification) algorithms to solve, aiming to encourage the development of more advanced models.

## Key Features

- ImageWoof contains images of 10 different dog breeds: Australian terrier, Border terrier, Samoyed, Beagle, Shih-Tzu, English foxhound, Rhodesian ridgeback, Dingo, Golden retriever, and Old English sheepdog.
- The dataset provides images at various resolutions (full size, 320px, 160px), accommodating different computational capabilities and research needs.
- It also includes a version with noisy labels, providing a more realistic scenario where labels might not always be reliable.

## Dataset Structure

The ImageWoof dataset structure is based on the dog breed classes, with each breed having its own directory of images.

## Applications

The ImageWoof dataset is widely used for training and evaluating deep learning models in image classification tasks, especially when it comes to more complex and similar classes. The dataset's challenge lies in the subtle differences between the dog breeds, pushing the limits of a model's performance and generalization.
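
One practical way to see this difficulty is to inspect a trained model's top-5 predictions on a single dog image: closely related breeds often appear together with similar confidences. A minimal sketch, assuming a checkpoint trained on ImageWoof at the default save location and an arbitrary image path:

```python
from ultralytics import YOLO

# Checkpoint from an ImageWoof training run (adjust the path to your run)
model = YOLO("runs/classify/train/weights/best.pt")

# Classify one image and print the five most likely breeds
results = model("path/to/dog.jpg")
probs = results[0].probs
for idx, conf in zip(probs.top5, probs.top5conf):
    print(f"{model.names[int(idx)]}: {float(conf):.3f}")
```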

## Usage

To train a CNN model on the ImageWoof dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="imagewoof", epochs=100, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=imagewoof model=yolo11n-cls.pt epochs=100 imgsz=224
        ```

## Dataset Variants

The ImageWoof dataset comes in three different sizes to accommodate various research needs and computational capabilities:

1. **Full Size (imagewoof)**: This is the original version of the ImageWoof dataset. It contains full-sized images and is ideal for final training and performance benchmarking.

2. **Medium Size (imagewoof320)**: This version contains images resized to have a maximum edge length of 320 pixels. It's suitable for faster training without significantly sacrificing model performance.

3. **Small Size (imagewoof160)**: This version contains images resized to have a maximum edge length of 160 pixels. It's designed for rapid prototyping and experimentation where training speed is a priority.

To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:

!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # For the medium-sized dataset
        model.train(data="imagewoof320", epochs=100, imgsz=224)

        # For the small-sized dataset
        model.train(data="imagewoof160", epochs=100, imgsz=224)
        ```

    === "CLI"

        ```bash
        # Load a pretrained model and train on the medium-sized dataset
        yolo classify train model=yolo11n-cls.pt data=imagewoof320 epochs=100 imgsz=224
        ```

It's important to note that using smaller images will likely yield lower performance in terms of classification accuracy. However, it's an excellent way to iterate quickly in the early stages of model development and prototyping.

## Sample Images and Annotations

The ImageWoof dataset contains colorful images of various dog breeds, providing a challenging dataset for image classification tasks. Here are some examples of images from the dataset:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/imagewoof-dataset-sample.avif)

The example showcases the subtle differences and similarities among the different dog breeds in the ImageWoof dataset, highlighting the complexity and difficulty of the classification task.

## Citations and Acknowledgments

If you use the ImageWoof dataset in your research or development work, please make sure to acknowledge the creators of the dataset by linking to the [official dataset repository](https://github.com/fastai/imagenette).

We would like to acknowledge the FastAI team for creating and maintaining the ImageWoof dataset as a valuable resource for the [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) research community. For more information about the ImageWoof dataset, visit the [ImageWoof dataset repository](https://github.com/fastai/imagenette).

## FAQ

### What is the ImageWoof dataset in Ultralytics?

The [ImageWoof](https://github.com/fastai/imagenette) dataset is a challenging subset of ImageNet focusing on 10 specific dog breeds. Created to push the limits of image classification models, it features breeds like Beagle, Shih-Tzu, and Golden Retriever. The dataset includes images at various resolutions (full size, 320px, 160px) and even noisy labels for more realistic training scenarios. This complexity makes ImageWoof ideal for developing more advanced deep learning models.

### How can I train a model using the ImageWoof dataset with Ultralytics YOLO?

To train a [Convolutional Neural Network](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNN) model on the ImageWoof dataset using Ultralytics YOLO for 100 epochs at an image size of 224x224, you can use the following code:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        model = YOLO("yolo11n-cls.pt")  # Load a pretrained model
        results = model.train(data="imagewoof", epochs=100, imgsz=224)
        ```

    === "CLI"

        ```bash
        yolo classify train data=imagewoof model=yolo11n-cls.pt epochs=100 imgsz=224
        ```

For more details on available training arguments, refer to the [Training](../../modes/train.md) page.

### What versions of the ImageWoof dataset are available?

The ImageWoof dataset comes in three sizes:

1. **Full Size (imagewoof)**: Ideal for final training and benchmarking, containing full-sized images.
2. **Medium Size (imagewoof320)**: Resized images with a maximum edge length of 320 pixels, suited for faster training.
3. **Small Size (imagewoof160)**: Resized images with a maximum edge length of 160 pixels, perfect for rapid prototyping.

Use these versions by replacing 'imagewoof' in the dataset argument accordingly. Note, however, that smaller images may yield lower classification [accuracy](https://www.ultralytics.com/glossary/accuracy) but can be useful for quicker iterations.

### How do noisy labels in the ImageWoof dataset benefit training?

Noisy labels in the ImageWoof dataset simulate real-world conditions where labels might not always be accurate. Training models with this data helps develop robustness and generalization in image classification tasks. This prepares the models to handle ambiguous or mislabeled data effectively, which is often encountered in practical applications.

### What are the key challenges of using the ImageWoof dataset?

The primary challenge of the ImageWoof dataset lies in the subtle differences among the dog breeds it includes. Since it focuses on 10 closely related breeds, distinguishing between them requires more advanced and fine-tuned image classification models. This makes ImageWoof an excellent benchmark to test the capabilities and improvements of [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models.
docs/en/datasets/classify/index.md
ADDED
@@ -0,0 +1,220 @@
---
comments: true
description: Learn how to structure datasets for YOLO classification tasks. Detailed folder structure and usage examples for effective training.
keywords: YOLO, image classification, dataset structure, CIFAR-10, Ultralytics, machine learning, training data, model evaluation
---

# Image Classification Datasets Overview

### Dataset Structure for YOLO Classification Tasks

For [Ultralytics](https://www.ultralytics.com/) YOLO classification tasks, the dataset must be organized in a specific split-directory structure under the `root` directory to facilitate proper training, testing, and optional validation processes. This structure includes separate directories for training (`train`) and testing (`test`) phases, with an optional directory for validation (`val`).

Each of these directories should contain one subdirectory for each class in the dataset. The subdirectories are named after the corresponding class and contain all the images for that class. Ensure that each image file is named uniquely and stored in a common format such as JPEG or PNG.

**Folder Structure Example**

Consider the CIFAR-10 dataset as an example. The folder structure should look like this:

```
cifar-10-/
|
|-- train/
|   |-- airplane/
|   |   |-- 10008_airplane.png
|   |   |-- 10009_airplane.png
|   |   |-- ...
|   |
|   |-- automobile/
|   |   |-- 1000_automobile.png
|   |   |-- 1001_automobile.png
|   |   |-- ...
|   |
|   |-- bird/
|   |   |-- 10014_bird.png
|   |   |-- 10015_bird.png
|   |   |-- ...
|   |
|   |-- ...
|
|-- test/
|   |-- airplane/
|   |   |-- 10_airplane.png
|   |   |-- 11_airplane.png
|   |   |-- ...
|   |
|   |-- automobile/
|   |   |-- 100_automobile.png
|   |   |-- 101_automobile.png
|   |   |-- ...
|   |
|   |-- bird/
|   |   |-- 1000_bird.png
|   |   |-- 1001_bird.png
|   |   |-- ...
|   |
|   |-- ...
|
|-- val/ (optional)
|   |-- airplane/
|   |   |-- 105_airplane.png
|   |   |-- 106_airplane.png
|   |   |-- ...
|   |
|   |-- automobile/
|   |   |-- 102_automobile.png
|   |   |-- 103_automobile.png
|   |   |-- ...
|   |
|   |-- bird/
|   |   |-- 1045_bird.png
|   |   |-- 1046_bird.png
|   |   |-- ...
|   |
|   |-- ...
```

This structured approach ensures that the model can effectively learn from well-organized classes during the training phase and accurately evaluate performance during testing and validation phases.
|
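Before training, it can help to verify the layout programmatically. The following is a minimal sanity-check sketch (not part of the Ultralytics API) that walks the example `cifar-10-` root shown above and reports per-class image counts:

```python
from pathlib import Path

root = Path("cifar-10-")  # dataset root from the example above

for split in ("train", "test", "val"):
    split_dir = root / split
    if not split_dir.is_dir():
        print(f"{split}: missing (note that 'val' is optional)")
        continue
    # Each subdirectory of a split is one class; count its image files
    for class_dir in sorted(p for p in split_dir.iterdir() if p.is_dir()):
        images = [f for f in class_dir.iterdir() if f.suffix.lower() in {".png", ".jpg", ".jpeg"}]
        print(f"{split}/{class_dir.name}: {len(images)} images")
```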
## Usage

!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="path/to/dataset", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=path/to/data model=yolo11n-cls.pt epochs=100 imgsz=640
        ```

## Supported Datasets

Ultralytics supports the following datasets with automatic download:

- [Caltech 101](caltech101.md): A dataset containing images of 101 object categories for [image classification](https://www.ultralytics.com/glossary/image-classification) tasks.
- [Caltech 256](caltech256.md): An extended version of Caltech 101 with 256 object categories and more challenging images.
- [CIFAR-10](cifar10.md): A dataset of 60K 32x32 color images in 10 classes, with 6K images per class.
- [CIFAR-100](cifar100.md): An extended version of CIFAR-10 with 100 object categories and 600 images per class.
- [Fashion-MNIST](fashion-mnist.md): A dataset consisting of 70,000 grayscale images of 10 fashion categories for image classification tasks.
- [ImageNet](imagenet.md): A large-scale dataset for [object detection](https://www.ultralytics.com/glossary/object-detection) and image classification with over 14 million images and 20,000 categories.
- [ImageNet-10](imagenet10.md): A smaller subset of ImageNet with 10 categories for faster experimentation and testing.
- [Imagenette](imagenette.md): A smaller subset of ImageNet that contains 10 easily distinguishable classes for quicker training and testing.
- [Imagewoof](imagewoof.md): A more challenging subset of ImageNet containing 10 dog breed categories for image classification tasks.
- [MNIST](mnist.md): A dataset of 70,000 grayscale images of handwritten digits for image classification tasks.
- [MNIST160](mnist.md): A compact subset of MNIST containing the first 8 images of each category from the training and test splits, 160 images in total.

### Adding your own dataset

If you have your own dataset and would like to use it for training classification models with Ultralytics, ensure that it follows the format specified above under "Dataset Structure for YOLO Classification Tasks" and then point your `data` argument to the dataset directory.
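After training on a custom dataset, a quick evaluation pass over the same directory confirms the model learned the classes. A minimal sketch, assuming your dataset follows the split layout above and that training wrote its weights to the default `runs/classify/train/weights/best.pt` location:

```python
from ultralytics import YOLO

# Load the weights from your training run (default output path assumed)
model = YOLO("runs/classify/train/weights/best.pt")

# Evaluate classification accuracy on the dataset's held-out split
metrics = model.val(data="path/to/dataset")
print(metrics.top1, metrics.top5)  # top-1 and top-5 accuracy
```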
## FAQ

### How do I structure my dataset for YOLO classification tasks?

To structure your dataset for Ultralytics YOLO classification tasks, you should follow a specific split-directory format. Organize your dataset into separate directories for `train`, `test`, and optionally `val`. Each of these directories should contain subdirectories named after each class, with the corresponding images inside. This facilitates smooth training and evaluation processes. For an example, consider the CIFAR-10 dataset format:

```
cifar-10-/
|-- train/
|   |-- airplane/
|   |-- automobile/
|   |-- bird/
|   ...
|-- test/
|   |-- airplane/
|   |-- automobile/
|   |-- bird/
|   ...
|-- val/ (optional)
|   |-- airplane/
|   |-- automobile/
|   |-- bird/
|   ...
```

For more details, visit [Dataset Structure for YOLO Classification Tasks](#dataset-structure-for-yolo-classification-tasks).

### What datasets are supported by Ultralytics YOLO for image classification?

Ultralytics YOLO supports automatic downloading of several datasets for image classification, including:

- [Caltech 101](caltech101.md)
- [Caltech 256](caltech256.md)
- [CIFAR-10](cifar10.md)
- [CIFAR-100](cifar100.md)
- [Fashion-MNIST](fashion-mnist.md)
- [ImageNet](imagenet.md)
- [ImageNet-10](imagenet10.md)
- [Imagenette](imagenette.md)
- [Imagewoof](imagewoof.md)
- [MNIST](mnist.md)

These datasets are structured in a way that makes them easy to use with YOLO. Each dataset's page provides further details about its structure and applications.

### How do I add my own dataset for YOLO image classification?

To use your own dataset with Ultralytics YOLO, ensure it follows the directory format required for the classification task, with separate `train`, `test`, and optionally `val` directories, and subdirectories for each class containing the respective images. Once your dataset is structured correctly, point the `data` argument to your dataset's root directory when initializing the training script. Here's an example in Python:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

# Train the model
results = model.train(data="path/to/your/dataset", epochs=100, imgsz=640)
```

More details can be found in the [Adding your own dataset](#adding-your-own-dataset) section.

### Why should I use Ultralytics YOLO for image classification?

Ultralytics YOLO offers several benefits for image classification, including:

- **Pretrained Models**: Load pretrained models like `yolo11n-cls.pt` to jump-start your training process.
- **Ease of Use**: Simple API and CLI commands for training and evaluation.
- **High Performance**: State-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed, ideal for real-time applications.
- **Support for Multiple Datasets**: Seamless integration with various popular datasets like CIFAR-10, ImageNet, and more.
- **Community and Support**: Access to extensive documentation and an active community for troubleshooting and improvements.

For additional insights and real-world applications, you can explore [Ultralytics YOLO](https://www.ultralytics.com/yolo).

### How can I train a model using Ultralytics YOLO?

Training a model using Ultralytics YOLO can be done easily in both Python and CLI. Here's an example:

!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model

        # Train the model
        results = model.train(data="path/to/dataset", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=path/to/data model=yolo11n-cls.pt epochs=100 imgsz=640
        ```

These examples demonstrate the straightforward process of training a YOLO model using either approach. For more information, visit the [Usage](#usage) section.
docs/en/datasets/classify/mnist.md
ADDED
@@ -0,0 +1,127 @@
---
comments: true
description: Explore the MNIST dataset, a cornerstone in machine learning for handwritten digit recognition. Learn about its structure, features, and applications.
keywords: MNIST, dataset, handwritten digits, image classification, deep learning, machine learning, training set, testing set, NIST
---

# MNIST Dataset

The [MNIST](http://yann.lecun.com/exdb/mnist/) (Modified National Institute of Standards and Technology) dataset is a large database of handwritten digits that is commonly used for training various image processing systems and machine learning models. It was created by "re-mixing" the samples from NIST's original datasets and has become a benchmark for evaluating the performance of image classification algorithms.

## Key Features

- MNIST contains 60,000 training images and 10,000 testing images of handwritten digits.
- The dataset comprises grayscale images of size 28x28 pixels.
- The images are normalized to fit into a 28x28 pixel [bounding box](https://www.ultralytics.com/glossary/bounding-box) and anti-aliased, introducing grayscale levels.
- MNIST is widely used for training and testing in the field of machine learning, especially for image classification tasks.

## Dataset Structure

The MNIST dataset is split into two subsets:

1. **Training Set**: This subset contains 60,000 images of handwritten digits used for training machine learning models.
2. **Testing Set**: This subset consists of 10,000 images used for testing and benchmarking the trained models.

## Extended MNIST (EMNIST)

Extended MNIST (EMNIST) is a newer dataset developed and released by NIST to be the successor to MNIST. While MNIST included images only of handwritten digits, EMNIST includes all the images from NIST Special Database 19, which is a large database of handwritten uppercase and lowercase letters as well as digits. The images in EMNIST were converted into the same 28x28 pixel format, by the same process, as were the MNIST images. Accordingly, tools that work with the older, smaller MNIST dataset will likely work unmodified with EMNIST.

## Applications

The MNIST dataset is widely used for training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in image classification tasks, such as [Convolutional Neural Networks](https://www.ultralytics.com/glossary/convolutional-neural-network-cnn) (CNNs), [Support Vector Machines](https://www.ultralytics.com/glossary/support-vector-machine-svm) (SVMs), and various other machine learning algorithms. The dataset's simple and well-structured format makes it an essential resource for researchers and practitioners in the field of machine learning and computer vision.

## Usage

To train a CNN model on the MNIST dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="mnist", epochs=100, imgsz=32)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=mnist model=yolo11n-cls.pt epochs=100 imgsz=32
        ```
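After training, the classifier can be applied to a single digit image. A minimal sketch, assuming a trained checkpoint at the default run location and a hypothetical `digit.png` input:

```python
from ultralytics import YOLO

# Load your trained classifier (hypothetical path from a training run)
model = YOLO("runs/classify/train/weights/best.pt")

# Classify one image and read the top-1 prediction
results = model("digit.png")
probs = results[0].probs
print(model.names[probs.top1], float(probs.top1conf))
```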
## Sample Images and Annotations

The MNIST dataset contains grayscale images of handwritten digits, providing a well-structured dataset for [image classification](https://www.ultralytics.com/glossary/image-classification) tasks. Here are some examples of images from the dataset:

![Dataset sample image](https://upload.wikimedia.org/wikipedia/commons/2/27/MnistExamples.png)

The example showcases the variety and complexity of the handwritten digits in the MNIST dataset, highlighting the importance of a diverse dataset for training robust image classification models.

## Citations and Acknowledgments

If you use the MNIST dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @article{lecun2010mnist,
          title={MNIST handwritten digit database},
          author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
          journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
          volume={2},
          year={2010}
        }
        ```

We would like to acknowledge Yann LeCun, Corinna Cortes, and Christopher J.C. Burges for creating and maintaining the MNIST dataset as a valuable resource for the [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) research community. For more information about the MNIST dataset and its creators, visit the [MNIST dataset website](http://yann.lecun.com/exdb/mnist/).

## FAQ

### What is the MNIST dataset, and why is it important in machine learning?

The [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, or Modified National Institute of Standards and Technology dataset, is a widely used collection of handwritten digits designed for training and testing image classification systems. It includes 60,000 training images and 10,000 testing images, all of which are grayscale and 28x28 pixels in size. The dataset's importance lies in its role as a standard benchmark for evaluating image classification algorithms, helping researchers and engineers to compare methods and track progress in the field.

### How can I use Ultralytics YOLO to train a model on the MNIST dataset?

To train a model on the MNIST dataset using Ultralytics YOLO, you can follow these steps:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-cls.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="mnist", epochs=100, imgsz=32)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo classify train data=mnist model=yolo11n-cls.pt epochs=100 imgsz=32
        ```

For a detailed list of available training arguments, refer to the [Training](../../modes/train.md) page.

### What is the difference between the MNIST and EMNIST datasets?

The MNIST dataset contains only handwritten digits, whereas the Extended MNIST (EMNIST) dataset includes both digits and uppercase and lowercase letters. EMNIST was developed as a successor to MNIST and uses the same 28x28 pixel format for its images, making it compatible with tools and models designed for the original MNIST dataset. This broader range of characters makes EMNIST useful for a wider variety of machine learning applications.

### Can I use Ultralytics HUB to train models on custom datasets like MNIST?

Yes, you can use Ultralytics HUB to train models on custom datasets like MNIST. Ultralytics HUB offers a user-friendly interface for uploading datasets, training models, and managing projects without needing extensive coding knowledge. For more details on how to get started, check out the [Ultralytics HUB Quickstart](https://docs.ultralytics.com/hub/quickstart/) page.
docs/en/datasets/detect/african-wildlife.md
ADDED
@@ -0,0 +1,147 @@
---
comments: true
description: Explore our African Wildlife Dataset featuring images of buffalo, elephant, rhino, and zebra for training computer vision models. Ideal for research and conservation.
keywords: African Wildlife Dataset, South African animals, object detection, computer vision, YOLO11, wildlife research, conservation, dataset
---

# African Wildlife Dataset

This dataset showcases four common animal classes typically found in South African nature reserves. It includes images of African wildlife such as buffalo, elephant, rhino, and zebra, providing valuable insights into their characteristics. Essential for training [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) algorithms, this dataset aids in identifying animals in various habitats, from zoos to forests, and supports wildlife research.

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/biIW5Z6GYl0"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> African Wildlife Animals Detection using Ultralytics YOLO11
</p>

## Dataset Structure

The African wildlife object detection dataset is split into three subsets:

- **Training set**: Contains 1052 images, each with corresponding annotations.
- **Validation set**: Includes 225 images, each with paired annotations.
- **Testing set**: Comprises 227 images, each with paired annotations.

## Applications

This dataset can be applied in various computer vision tasks such as [object detection](https://www.ultralytics.com/glossary/object-detection), object tracking, and research. Specifically, it can be used to train and evaluate models for identifying African wildlife in images, with applications in wildlife conservation, ecological research, and monitoring efforts in natural reserves and protected areas. It can also serve as a valuable resource for educational purposes, enabling students and researchers to study and understand the characteristics and behaviors of different animal species.
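For the object-tracking use case mentioned above, Ultralytics models also expose a `track` mode. Below is a minimal sketch; the `wildlife.mp4` clip is a hypothetical input, and `path/to/best.pt` stands in for a model fine-tuned on this dataset:

```python
from ultralytics import YOLO

# Load a model fine-tuned on the African wildlife dataset (hypothetical path)
model = YOLO("path/to/best.pt")

# Track animals across video frames and save the annotated output
results = model.track(source="wildlife.mp4", save=True)
```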
## Dataset YAML

A YAML (Yet Another Markup Language) file defines the dataset configuration, including paths, classes, and other pertinent details. For the African wildlife dataset, the `african-wildlife.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml).

!!! example "ultralytics/cfg/datasets/african-wildlife.yaml"

    ```yaml
    --8<-- "ultralytics/cfg/datasets/african-wildlife.yaml"
    ```

## Usage

To train a YOLO11n model on the African wildlife dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="african-wildlife.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=african-wildlife.yaml model=yolo11n.pt epochs=100 imgsz=640
        ```

!!! example "Inference Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("path/to/best.pt")  # load an African-wildlife fine-tuned model

        # Inference using the model
        results = model.predict("https://ultralytics.com/assets/african-wildlife-sample.jpg")
        ```

    === "CLI"

        ```bash
        # Start prediction with a fine-tuned *.pt model
        yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/african-wildlife-sample.jpg"
        ```
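To work with the predictions programmatically rather than only saving them, the returned `Results` objects expose the detected boxes. A minimal sketch extending the Python inference example above:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # African-wildlife fine-tuned model (hypothetical path)
results = model.predict("https://ultralytics.com/assets/african-wildlife-sample.jpg")

# Print the class name and confidence of each detected animal
for box in results[0].boxes:
    print(f"{model.names[int(box.cls)]}: {float(box.conf):.2f}")
```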
## Sample Images and Annotations

The African wildlife dataset comprises a wide variety of images showcasing diverse animal species and their natural habitats. Below are examples of images from the dataset, each accompanied by its corresponding annotations.

![African wildlife dataset sample image](https://github.com/ultralytics/docs/releases/download/0/african-wildlife-dataset-sample.avif)

- **Mosaiced Image**: Here, we present a training batch consisting of mosaiced dataset images. Mosaicing, a training technique, combines multiple images into one, enriching batch diversity. This method helps enhance the model's ability to generalize across different object sizes, aspect ratios, and contexts.

This example illustrates the variety and complexity of images in the African wildlife dataset, emphasizing the benefits of including mosaicing during the training process.

## Citations and Acknowledgments

The dataset has been released under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).

## FAQ

### What is the African Wildlife Dataset, and how can it be used in computer vision projects?

The African Wildlife Dataset includes images of four common animal species found in South African nature reserves: buffalo, elephant, rhino, and zebra. It is a valuable resource for training computer vision algorithms in object detection and animal identification. The dataset supports various tasks like object tracking, research, and conservation efforts. For more information on its structure and applications, refer to the [Dataset Structure](#dataset-structure) and [Applications](#applications) sections.

### How do I train a YOLO11 model using the African Wildlife Dataset?

You can train a YOLO11 model on the African Wildlife Dataset by using the `african-wildlife.yaml` configuration file. Below is an example of how to train the YOLO11n model for 100 epochs with an image size of 640:

!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="african-wildlife.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=african-wildlife.yaml model=yolo11n.pt epochs=100 imgsz=640
        ```

For additional training parameters and options, refer to the [Training](../../modes/train.md) documentation.

### Where can I find the YAML configuration file for the African Wildlife Dataset?

The YAML configuration file for the African Wildlife Dataset, named `african-wildlife.yaml`, can be found at [this GitHub link](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml). This file defines the dataset configuration, including paths, classes, and other details crucial for training [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) models. See the [Dataset YAML](#dataset-yaml) section for more details.

### Can I see sample images and annotations from the African Wildlife Dataset?

Yes, the African Wildlife Dataset includes a wide variety of images showcasing diverse animal species in their natural habitats. You can view sample images and their corresponding annotations in the [Sample Images and Annotations](#sample-images-and-annotations) section. This section also illustrates the use of the mosaicing technique to combine multiple images into one for enriched batch diversity, enhancing the model's generalization ability.

### How can the African Wildlife Dataset be used to support wildlife conservation and research?

The African Wildlife Dataset is ideal for supporting wildlife conservation and research by enabling the training and evaluation of models to identify African wildlife in different habitats. These models can assist in monitoring animal populations, studying their behavior, and recognizing conservation needs. Additionally, the dataset can be utilized for educational purposes, helping students and researchers understand the characteristics and behaviors of different animal species. More details can be found in the [Applications](#applications) section.
docs/en/datasets/detect/argoverse.md
ADDED
@@ -0,0 +1,153 @@
---
comments: true
description: Explore the comprehensive Argoverse dataset by Argo AI for 3D tracking, motion forecasting, and stereo depth estimation in autonomous driving research.
keywords: Argoverse dataset, autonomous driving, 3D tracking, motion forecasting, stereo depth estimation, Argo AI, LiDAR point clouds, high-resolution images, HD maps
---

# Argoverse Dataset

The [Argoverse](https://www.argoverse.org/) dataset is a collection of data designed to support research in autonomous driving tasks, such as 3D tracking, motion forecasting, and stereo depth estimation. Developed by Argo AI, the dataset provides a wide range of high-quality sensor data, including high-resolution images, LiDAR point clouds, and map data.

!!! note

    The Argoverse dataset `*.zip` file required for training was removed from Amazon S3 after the shutdown of Argo AI by Ford, but we have made it available for manual download on [Google Drive](https://drive.google.com/file/d/1st9qW3BeIwQsnR0t8mRpvbsSWIo16ACi/view?usp=drive_link).

## Key Features

- Argoverse contains over 290K labeled 3D object tracks and 5 million object instances across 1,263 distinct scenes.
- The dataset includes high-resolution camera images, LiDAR point clouds, and richly annotated HD maps.
- Annotations include 3D bounding boxes for objects, object tracks, and trajectory information.
- Argoverse provides multiple subsets for different tasks, such as 3D tracking, motion forecasting, and stereo depth estimation.

## Dataset Structure

The Argoverse dataset is organized into three main subsets:

1. **Argoverse 3D Tracking**: This subset contains 113 scenes with over 290K labeled 3D object tracks, focusing on 3D object tracking tasks. It includes LiDAR point clouds, camera images, and sensor calibration information.
2. **Argoverse Motion Forecasting**: This subset consists of 324K vehicle trajectories collected from 60 hours of driving data, suitable for motion forecasting tasks.
3. **Argoverse Stereo Depth Estimation**: This subset is designed for stereo depth estimation tasks and includes over 10K stereo image pairs with corresponding LiDAR point clouds for ground-truth depth estimation.

## Applications

The Argoverse dataset is widely used for training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in autonomous driving tasks such as 3D object tracking, motion forecasting, and stereo depth estimation. The dataset's diverse set of sensor data, object annotations, and map information makes it a valuable resource for researchers and practitioners in the field of autonomous driving.

## Dataset YAML

A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the Argoverse dataset, the `Argoverse.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml).

!!! example "ultralytics/cfg/datasets/Argoverse.yaml"

    ```yaml
    --8<-- "ultralytics/cfg/datasets/Argoverse.yaml"
    ```
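To inspect this configuration programmatically, for example to list the class names before training, the file is plain YAML. A minimal sketch, assuming a local copy of `Argoverse.yaml` in the working directory:

```python
import yaml  # PyYAML, installed as an Ultralytics dependency

# Parse the dataset config to inspect its paths and class names
with open("Argoverse.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["names"])  # class-index to class-name mapping
print(cfg.get("path"))  # dataset root directory
```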
## Usage

To train a YOLO11n model on the Argoverse dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=Argoverse.yaml model=yolo11n.pt epochs=100 imgsz=640
        ```

## Sample Data and Annotations

The Argoverse dataset contains a diverse set of sensor data, including camera images, LiDAR point clouds, and HD map information, providing rich context for autonomous driving tasks. Here are some examples of data from the dataset, along with their corresponding annotations:

![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/argoverse-3d-tracking-sample-images.avif)

- **Argoverse 3D Tracking**: This image demonstrates an example of 3D object tracking, where objects are annotated with 3D bounding boxes. The dataset provides LiDAR point clouds and camera images to facilitate the development of models for this task.

The example showcases the variety and complexity of the data in the Argoverse dataset and highlights the importance of high-quality sensor data for autonomous driving tasks.

## Citations and Acknowledgments

If you use the Argoverse dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @inproceedings{chang2019argoverse,
          title={Argoverse: 3D Tracking and Forecasting with Rich Maps},
          author={Chang, Ming-Fang and Lambert, John and Sangkloy, Patsorn and Singh, Jagjeet and Bak, Slawomir and Hartnett, Andrew and Wang, Dequan and Carr, Peter and Lucey, Simon and Ramanan, Deva and others},
          booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
          pages={8748--8757},
          year={2019}
        }
        ```

We would like to acknowledge Argo AI for creating and maintaining the Argoverse dataset as a valuable resource for the autonomous driving research community. For more information about the Argoverse dataset and its creators, visit the [Argoverse dataset website](https://www.argoverse.org/).

## FAQ

### What is the Argoverse dataset and its key features?

The [Argoverse](https://www.argoverse.org/) dataset, developed by Argo AI, supports autonomous driving research. It includes over 290K labeled 3D object tracks and 5 million object instances across 1,263 distinct scenes. The dataset provides high-resolution camera images, LiDAR point clouds, and annotated HD maps, making it valuable for tasks like 3D tracking, motion forecasting, and stereo depth estimation.

### How can I train an Ultralytics YOLO model using the Argoverse dataset?

To train a YOLO11 model with the Argoverse dataset, use the provided YAML configuration file and the following code:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=Argoverse.yaml model=yolo11n.pt epochs=100 imgsz=640
        ```

For a detailed explanation of the arguments, refer to the model [Training](../../modes/train.md) page.

### What types of data and annotations are available in the Argoverse dataset?

The Argoverse dataset includes various sensor data types such as high-resolution camera images, LiDAR point clouds, and HD map data. Annotations include 3D bounding boxes, object tracks, and trajectory information. These comprehensive annotations are essential for accurate model training in tasks like 3D object tracking, motion forecasting, and stereo depth estimation.

### How is the Argoverse dataset structured?

The dataset is divided into three main subsets:

1. **Argoverse 3D Tracking**: Contains 113 scenes with over 290K labeled 3D object tracks, focusing on 3D object tracking tasks. It includes LiDAR point clouds, camera images, and sensor calibration information.
2. **Argoverse Motion Forecasting**: Consists of 324K vehicle trajectories collected from 60 hours of driving data, suitable for motion forecasting tasks.
3. **Argoverse Stereo Depth Estimation**: Includes over 10K stereo image pairs with corresponding LiDAR point clouds for ground-truth depth estimation.

### Where can I download the Argoverse dataset now that it has been removed from Amazon S3?

The Argoverse dataset `*.zip` file, previously available on Amazon S3, can now be manually downloaded from [Google Drive](https://drive.google.com/file/d/1st9qW3BeIwQsnR0t8mRpvbsSWIo16ACi/view?usp=drive_link).

### What is the YAML configuration file used for with the Argoverse dataset?

A YAML file contains the dataset's paths, classes, and other essential information. For the Argoverse dataset, the configuration file, `Argoverse.yaml`, can be found at the following link: [Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml).

For more information about YAML configurations, see our [datasets](../index.md) guide.
docs/en/datasets/detect/brain-tumor.md
ADDED
@@ -0,0 +1,168 @@
---
comments: true
description: Explore the brain tumor detection dataset with MRI/CT images. Essential for training AI models for early diagnosis and treatment planning.
keywords: brain tumor dataset, MRI scans, CT scans, brain tumor detection, medical imaging, AI in healthcare, computer vision, early diagnosis, treatment planning
---

# Brain Tumor Dataset

A brain tumor detection dataset consists of medical images from MRI or CT scans, containing information about brain tumor presence, location, and characteristics. This dataset is essential for training [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) algorithms to automate brain tumor identification, aiding in early diagnosis and treatment planning.

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/ogTBBD8McRk"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> Brain Tumor Detection using Ultralytics HUB
</p>

## Dataset Structure

The brain tumor dataset is divided into two subsets:

- **Training set**: Consisting of 893 images, each accompanied by corresponding annotations.
- **Testing set**: Comprising 223 images, with annotations paired for each one.

## Applications

The application of brain tumor detection using computer vision enables early diagnosis, treatment planning, and monitoring of tumor progression. By analyzing medical imaging data like MRI or CT scans, computer vision systems assist in accurately identifying brain tumors, aiding in timely medical intervention and personalized treatment strategies.

## Dataset YAML

A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the brain tumor dataset, the `brain-tumor.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml).

!!! example "ultralytics/cfg/datasets/brain-tumor.yaml"

    ```yaml
    --8<-- "ultralytics/cfg/datasets/brain-tumor.yaml"
    ```

## Usage

To train a YOLO11n model on the brain tumor dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=brain-tumor.yaml model=yolo11n.pt epochs=100 imgsz=640
        ```

!!! example "Inference Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("path/to/best.pt")  # load a brain-tumor fine-tuned model

        # Inference using the model
        results = model.predict("https://ultralytics.com/assets/brain-tumor-sample.jpg")
        ```

    === "CLI"

        ```bash
        # Start prediction with a fine-tuned *.pt model
        yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/brain-tumor-sample.jpg"
        ```
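To save an annotated copy of the scan rather than only collecting results, each `Results` object can render its own visualization. A minimal sketch building on the Python inference example above; the output filename is arbitrary:

```python
import cv2

from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # brain-tumor fine-tuned model (hypothetical path)
results = model.predict("https://ultralytics.com/assets/brain-tumor-sample.jpg")

# Render boxes and labels onto the image (returns a BGR numpy array) and save it
annotated = results[0].plot()
cv2.imwrite("brain_tumor_annotated.jpg", annotated)
```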
## Sample Images and Annotations

The brain tumor dataset encompasses a wide array of images featuring diverse object categories and intricate scenes. Presented below are examples of images from the dataset, accompanied by their respective annotations.

![Brain tumor dataset sample image](https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-2.avif)

- **Mosaiced Image**: Displayed here is a training batch comprising mosaiced dataset images. Mosaicing, a training technique, consolidates multiple images into one, enhancing batch diversity. This approach aids in improving the model's capacity to generalize across various object sizes, aspect ratios, and contexts.

This example highlights the diversity and intricacy of images within the brain tumor dataset, underscoring the advantages of incorporating mosaicing during the training phase.

## Citations and Acknowledgments

The dataset has been released under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).

## FAQ

### What is the structure of the brain tumor dataset available in Ultralytics documentation?

The brain tumor dataset is divided into two subsets: the **training set** consists of 893 images with corresponding annotations, while the **testing set** comprises 223 images with paired annotations. This structured division aids in developing robust and accurate computer vision models for detecting brain tumors. For more information on the dataset structure, visit the [Dataset Structure](#dataset-structure) section.

### How can I train a YOLO11 model on the brain tumor dataset using Ultralytics?

You can train a YOLO11 model on the brain tumor dataset for 100 epochs with an image size of 640px using both Python and CLI methods. Below are the examples for both:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=brain-tumor.yaml model=yolo11n.pt epochs=100 imgsz=640
        ```

For a detailed list of available arguments, refer to the [Training](../../modes/train.md) page.

### What are the benefits of using the brain tumor dataset for AI in healthcare?

Using the brain tumor dataset in AI projects enables early diagnosis and treatment planning for brain tumors. It helps in automating brain tumor identification through computer vision, facilitating accurate and timely medical interventions, and supporting personalized treatment strategies. This application holds significant potential in improving patient outcomes and medical efficiencies.

### How do I perform inference using a fine-tuned YOLO11 model on the brain tumor dataset?

Inference using a fine-tuned YOLO11 model can be performed with either Python or CLI approaches. Here are the examples:

!!! example "Inference Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("path/to/best.pt")  # load a brain-tumor fine-tuned model

        # Inference using the model
        results = model.predict("https://ultralytics.com/assets/brain-tumor-sample.jpg")
        ```

    === "CLI"

        ```bash
        # Start prediction with a fine-tuned *.pt model
        yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/brain-tumor-sample.jpg"
        ```

### Where can I find the YAML configuration for the brain tumor dataset?

The YAML configuration file for the brain tumor dataset can be found at [brain-tumor.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml). This file includes paths, classes, and additional relevant information necessary for training and evaluating models on this dataset.
docs/en/datasets/detect/coco.md
ADDED
@@ -0,0 +1,173 @@
---
comments: true
description: Explore the COCO dataset for object detection and segmentation. Learn about its structure, usage, pretrained models, and key features.
keywords: COCO dataset, object detection, segmentation, benchmarking, computer vision, pose estimation, YOLO models, COCO annotations
---

# COCO Dataset

The [COCO](https://cocodataset.org/#home) (Common Objects in Context) dataset is a large-scale object detection, segmentation, and captioning dataset. It is designed to encourage research on a wide variety of object categories and is commonly used for benchmarking [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models. It is an essential dataset for researchers and developers working on object detection, segmentation, and pose estimation tasks.

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/uDrn9QZJ2lk"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> Ultralytics COCO Dataset Overview
</p>

## COCO Pretrained Models

{% include "macros/yolo-det-perf.md" %}

## Key Features

- COCO contains 330K images, with 200K images having annotations for object detection, segmentation, and captioning tasks.
- The dataset comprises 80 object categories, including common objects like cars, bicycles, and animals, as well as more specific categories such as umbrellas, handbags, and sports equipment.
- Annotations include object bounding boxes, segmentation masks, and captions for each image.
- COCO provides standardized evaluation metrics like [mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP) for object detection and mean Average [Recall](https://www.ultralytics.com/glossary/recall) (mAR) for segmentation tasks, making it suitable for comparing model performance.

## Dataset Structure

The COCO dataset is split into three subsets:

1. **Train2017**: This subset contains 118K images for training object detection, segmentation, and captioning models.
2. **Val2017**: This subset has 5K images used for validation purposes during model training.
3. **Test2017**: This subset consists of 20K images used for testing and benchmarking the trained models. Ground truth annotations for this subset are not publicly available, and the results are submitted to the [COCO evaluation server](https://codalab.lisn.upsaclay.fr/competitions/7384) for performance evaluation.

## Applications

The COCO dataset is widely used for training and evaluating [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models in object detection (such as YOLO, Faster R-CNN, and SSD), [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation) (such as Mask R-CNN), and keypoint detection (such as OpenPose). The dataset's diverse set of object categories, large number of annotated images, and standardized evaluation metrics make it an essential resource for computer vision researchers and practitioners.

## Dataset YAML

A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the COCO dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).

!!! example "ultralytics/cfg/datasets/coco.yaml"

    ```yaml
    --8<-- "ultralytics/cfg/datasets/coco.yaml"
    ```

## Usage

To train a YOLO11n model on the COCO dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="coco.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo detect train data=coco.yaml model=yolo11n.pt epochs=100 imgsz=640
        ```
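With a pretrained checkpoint you can also approximate the published mAP numbers by validating on the val2017 split. A minimal sketch; note that the dataset is large and downloads automatically on first use:

```python
from ultralytics import YOLO

# Validate a COCO-pretrained model on the val2017 split
model = YOLO("yolo11n.pt")
metrics = model.val(data="coco.yaml")

print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
```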
79 |
+
|
80 |
+
## Sample Images and Annotations
|
81 |
+
|
82 |
+
The COCO dataset contains a diverse set of images with various object categories and complex scenes. Here are some examples of images from the dataset, along with their corresponding annotations:
|
83 |
+
|
84 |
+

|
85 |
+
|
86 |
+
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
|
87 |
+
|
88 |
+
The example showcases the variety and complexity of the images in the COCO dataset and the benefits of using mosaicing during the training process.
|
89 |
+
|
90 |
+
## Citations and Acknowledgments
|
91 |
+
|
92 |
+
If you use the COCO dataset in your research or development work, please cite the following paper:
|
93 |
+
|
94 |
+
!!! quote ""
|
95 |
+
|
96 |
+
=== "BibTeX"
|
97 |
+
|
98 |
+
```bibtex
|
99 |
+
@misc{lin2015microsoft,
|
100 |
+
title={Microsoft COCO: Common Objects in Context},
|
101 |
+
author={Tsung-Yi Lin and Michael Maire and Serge Belongie and Lubomir Bourdev and Ross Girshick and James Hays and Pietro Perona and Deva Ramanan and C. Lawrence Zitnick and Piotr Dollár},
|
102 |
+
year={2015},
|
103 |
+
eprint={1405.0312},
|
104 |
+
archivePrefix={arXiv},
|
105 |
+
primaryClass={cs.CV}
|
106 |
+
}
|
107 |
+
```
|
108 |
+
|
109 |
+
We would like to acknowledge the COCO Consortium for creating and maintaining this valuable resource for the computer vision community. For more information about the COCO dataset and its creators, visit the [COCO dataset website](https://cocodataset.org/#home).
|
110 |
+
|
111 |
+
## FAQ
|
112 |
+
|
113 |
+
### What is the COCO dataset and why is it important for computer vision?
|
114 |
+
|
115 |
+
The [COCO dataset](https://cocodataset.org/#home) (Common Objects in Context) is a large-scale dataset used for [object detection](https://www.ultralytics.com/glossary/object-detection), segmentation, and captioning. It contains 330K images with detailed annotations for 80 object categories, making it essential for benchmarking and training computer vision models. Researchers use COCO due to its diverse categories and standardized evaluation metrics like mean Average [Precision](https://www.ultralytics.com/glossary/precision) (mAP).
|
116 |
+
|
117 |
+
### How can I train a YOLO model using the COCO dataset?
|
118 |
+
|
119 |
+
To train a YOLO11 model using the COCO dataset, you can use the following code snippets:
|
120 |
+
|
121 |
+
!!! example "Train Example"
|
122 |
+
|
123 |
+
=== "Python"
|
124 |
+
|
125 |
+
```python
|
126 |
+
from ultralytics import YOLO
|
127 |
+
|
128 |
+
# Load a model
|
129 |
+
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
|
130 |
+
|
131 |
+
# Train the model
|
132 |
+
results = model.train(data="coco.yaml", epochs=100, imgsz=640)
|
133 |
+
```
|
134 |
+
|
135 |
+
=== "CLI"
|
136 |
+
|
137 |
+
```bash
|
138 |
+
# Start training from a pretrained *.pt model
|
139 |
+
yolo detect train data=coco.yaml model=yolo11n.pt epochs=100 imgsz=640
|
140 |
+
```
|
141 |
+
|
142 |
+
Refer to the [Training page](../../modes/train.md) for more details on available arguments.
|
143 |
+
|
144 |
+
### What are the key features of the COCO dataset?
|
145 |
+
|
146 |
+
The COCO dataset includes:
|
147 |
+
|
148 |
+
- 330K images, with 200K annotated for object detection, segmentation, and captioning.
|
149 |
+
- 80 object categories ranging from common items like cars and animals to specific ones like handbags and sports equipment.
|
150 |
+
- Standardized evaluation metrics for object detection (mAP) and segmentation (mean Average Recall, mAR).
|
151 |
+
- **Mosaicing** technique in training batches to enhance model generalization across various object sizes and contexts.
|
152 |
+
|
153 |
+
### Where can I find pretrained YOLO11 models trained on the COCO dataset?
|
154 |
+
|
155 |
+
Pretrained YOLO11 models on the COCO dataset can be downloaded from the links provided in the documentation. Examples include:
|
156 |
+
|
157 |
+
- [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt)
|
158 |
+
- [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt)
|
159 |
+
- [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt)
|
160 |
+
- [YOLO11l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l.pt)
|
161 |
+
- [YOLO11x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt)
|
162 |
+
|
163 |
+
These models vary in size, mAP, and inference speed, providing options for different performance and resource requirements.
|
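As a quick check of any of these checkpoints, you can run a single prediction. The sketch below uses the nano model on a sample image commonly used in Ultralytics examples:

```python
from ultralytics import YOLO

# Download (if needed) and load the COCO-pretrained nano model
model = YOLO("yolo11n.pt")

# Run inference on a sample image and list the detected classes
results = model("https://ultralytics.com/images/bus.jpg")
for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))
```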
### How is the COCO dataset structured and how do I use it?

The COCO dataset is split into three subsets:

1. **Train2017**: 118K images for training.
2. **Val2017**: 5K images for validation during training.
3. **Test2017**: 20K images for benchmarking trained models. Results need to be submitted to the [COCO evaluation server](https://codalab.lisn.upsaclay.fr/competitions/7384) for performance evaluation.

The dataset's YAML configuration file is available at [coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml), which defines paths, classes, and dataset details.