---
comments: true
description: >-
  Learn about the YOLO family, SAM, MobileSAM, FastSAM, YOLO-NAS, and RT-DETR
  models supported by Ultralytics, with examples on how to use them via CLI and
  Python.
keywords: >-
  Ultralytics, documentation, YOLO, SAM, MobileSAM, FastSAM, YOLO-NAS, RT-DETR,
  models, architectures, Python, CLI
---
# Models
Ultralytics supports many models and architectures, with more to come in the future. Want to add your model architecture? Here's how you can contribute.

In this documentation, we provide information on the following key models:
- YOLOv3: The third iteration of the YOLO model family, originally by Joseph Redmon, known for its efficient real-time object detection capabilities.
- YOLOv4: A darknet-native update to YOLOv3 released by Alexey Bochkovskiy in 2020.
- YOLOv5: An improved version of the YOLO architecture by Ultralytics, offering better performance and speed tradeoffs compared to previous versions.
- YOLOv6: Released by Meituan in 2022, and used in many of the company's autonomous delivery robots.
- YOLOv7: Updated YOLO models released in 2022 by the authors of YOLOv4.
- YOLOv8: The latest version of the YOLO family, featuring enhanced capabilities such as instance segmentation, pose/keypoints estimation, and classification.
- Segment Anything Model (SAM): Meta's Segment Anything Model (SAM) for promptable image segmentation.
- Mobile Segment Anything Model (MobileSAM): MobileSAM for mobile applications by Kyung Hee University.
- Fast Segment Anything Model (FastSAM): FastSAM by Image & Video Analysis Group, Institute of Automation, Chinese Academy of Sciences.
- YOLO-NAS: YOLO Neural Architecture Search (NAS) Models.
- Realtime Detection Transformers (RT-DETR): Baidu's PaddlePaddle Realtime Detection Transformer (RT-DETR) models.
You can use many of these models directly in the Command Line Interface (CLI) or in a Python environment. Below are examples of how to use the models with CLI and Python:
## Usage
This example provides simple training and inference code for YOLO, SAM, and RT-DETR models. For more options, including how to handle inference results, see Predict mode. For using models with additional modes, see Train, Val, and Export.
!!! example ""

    === "Python"

        PyTorch pretrained `*.pt` models as well as configuration `*.yaml` files can be passed to the `YOLO()`, `SAM()`, `NAS()` and `RTDETR()` classes to create a model instance in Python:

        ```python
        from ultralytics import YOLO

        # Load a COCO-pretrained YOLOv8n model
        model = YOLO('yolov8n.pt')

        # Display model information (optional)
        model.info()

        # Train the model on the COCO8 example dataset for 100 epochs
        results = model.train(data='coco8.yaml', epochs=100, imgsz=640)

        # Run inference with the YOLOv8n model on the 'bus.jpg' image
        results = model('path/to/bus.jpg')
        ```
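
        A model can also be built from a `*.yaml` configuration file instead of pretrained weights. Below is a minimal sketch, assuming the `yolov8n.yaml` model configuration bundled with recent Ultralytics releases:

        ```python
        from ultralytics import YOLO

        # Build a new, untrained YOLOv8n model from its YAML configuration
        model = YOLO('yolov8n.yaml')

        # Optionally transfer matching pretrained weights into the newly built model
        model = YOLO('yolov8n.yaml').load('yolov8n.pt')
        ```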
=== "CLI"
CLI commands are available to directly run the models:
```bash
# Load a COCO-pretrained YOLOv8n model and train it on the COCO8 example dataset for 100 epochs
yolo train model=yolov8n.pt data=coco8.yaml epochs=100 imgsz=640
# Load a COCO-pretrained YOLOv8n model and run inference on the 'bus.jpg' image
yolo predict model=yolov8n.pt source=path/to/bus.jpg
```
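
The same usage pattern extends to the other architectures listed above. The sketch below loads SAM and RT-DETR models through the `SAM()` and `RTDETR()` classes from the Python example; the checkpoint names `sam_b.pt` and `rtdetr-l.pt` are assumptions here, so substitute whichever weights you actually have:

```python
from ultralytics import RTDETR, SAM

# Load a pretrained Segment Anything model (checkpoint name assumed: 'sam_b.pt')
sam_model = SAM('sam_b.pt')
sam_model.info()

# Load a COCO-pretrained RT-DETR-l model (checkpoint name assumed: 'rtdetr-l.pt')
rtdetr_model = RTDETR('rtdetr-l.pt')

# Run inference with each model on the same image
sam_results = sam_model('path/to/bus.jpg')
rtdetr_results = rtdetr_model('path/to/bus.jpg')
```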
For more details on each model, their supported tasks, modes, and performance, please visit their respective documentation pages linked above.