# Marvel Rivals Object Detection Suite using YOLOv8
This repository contains the code, models, and documentation for a project focused on detecting various elements within the game Marvel Rivals using YOLOv8. The core of the project involved iteratively training a model to recognize game heroes, achieving significant performance improvements through data collection and model scaling. Additional models were developed to detect UI elements, differentiate between friends and foes, and estimate player HP levels.
Read the full, detailed project write-up [here](https://docs.google.com/document/d/1zxS4jbj-goRwhP6FSn8UhTEwRuJKaUCk2POmjeqOK2g/edit?tab=t.0).
## Overview
This project started as a personal challenge to learn machine learning and apply it to a real-time object detection task in a dynamic environment. Over the course of 115+ days, the primary hero detection model was iteratively improved from recognizing less than 25% of heroes (0.33 mAP50) to over 80% (0.825 mAP50) through multiple stages of data gathering, labeling, training, and evaluation.
The repository includes:
- The final trained models in both PyTorch (`.pt`) and ONNX (`.onnx`) formats.
- Scripts for running inference using the trained models.
- Notebooks used during the training process (primarily for reference).
- Detailed documentation (linked above).
## Features
This project provides models capable of detecting:
- Heroes: Identifying 37 distinct Marvel Rivals heroes.
- Friend/Foe: Differentiating between teammates (blue outline) and enemies (red outline).
- HP Levels: Detecting approximate health bar levels for players (Full, High, Half, Low, Zero for both Friend and Foe).
- UI Elements: Recognizing key status indicators like ability cooldowns, ultimate charge, and player health status.
## Tech Stack
- Python 3.10.6
- PyTorch
- Ultralytics YOLOv8
- OpenCV
- ONNX / ONNX Runtime (for ONNX models)
- Label Studio (for data annotation - not included in repo)
- NumPy
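
The repository is not shown to include a requirements file, so as a starting point, the stack above can be installed with pip (swap `onnxruntime` for `onnxruntime-gpu` if you want GPU-accelerated ONNX inference):

```
pip install ultralytics opencv-python numpy onnx onnxruntime
```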
## Models
The following models are provided:

| Model Type | Format | Purpose | Performance Notes |
| --- | --- | --- | --- |
| Hero | `.pt`, `.onnx` | Detects 35 distinct Marvel Rivals heroes | Final mAP50: 0.825 (see write-up) |
| HP | `.pt`, `.onnx` | Detects player HP bar levels (10 classes) | Final mAP50: 0.642 (needs more data) |
| UI | `.pt`, `.onnx` | Detects UI elements (cooldowns, ult, etc.) | Final mAP50: 0.869 (small dataset) |
| Friend/Foe | `.pt`, `.onnx` | Detects teammates vs. enemies | Final mAP50: 0.847 (small dataset) |
Refer to the [full project write-up](https://docs.google.com/document/d/1zxS4jbj-goRwhP6FSn8UhTEwRuJKaUCk2POmjeqOK2g/edit?tab=t.0) for detailed performance metrics and training history.
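
Since the `.pt` weights are standard Ultralytics YOLOv8 checkpoints, they can also be loaded directly, outside the provided scripts. A minimal sketch, assuming the `models/pytorch/hero.pt` path from the Usage example and a hypothetical `screenshot.png` input:

```python
from ultralytics import YOLO

# Load the hero model (path from the Usage example below).
model = YOLO("models/pytorch/hero.pt")
print(model.names)  # class-index -> hero-name mapping stored in the weights

# Run inference on a single image; "screenshot.png" is a placeholder input.
results = model.predict("screenshot.png", conf=0.5)
for box in results[0].boxes:
    cls_id = int(box.cls)
    print(model.names[cls_id], float(box.conf), box.xyxy[0].tolist())
```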
## Training
The `notebooks/training.ipynb` file contains examples of the code used for training the YOLOv8 models. However, the complete training process, including data collection, labeling strategies, iterative refinements, and results, is detailed in the [full project write-up](https://docs.google.com/document/d/1zxS4jbj-goRwhP6FSn8UhTEwRuJKaUCk2POmjeqOK2g/edit?tab=t.0). This notebook serves primarily as a reference.
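
For orientation, the core of a YOLOv8 training run with Ultralytics looks like the sketch below. The base checkpoint, dataset YAML, and hyperparameters here are placeholders, not the actual settings used for these models (those are in the notebook and write-up):

```python
from ultralytics import YOLO

# Placeholder settings; the actual base model, dataset config, and
# hyperparameters for each model are documented in notebooks/training.ipynb.
model = YOLO("yolov8s.pt")  # COCO-pretrained checkpoint as a starting point
model.train(
    data="hero_dataset.yaml",  # hypothetical dataset config (image paths + class names)
    epochs=100,
    imgsz=640,
)

metrics = model.val()     # evaluates on the validation split
print(metrics.box.map50)  # the mAP50 figure quoted throughout this README
```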
## Limitations
- Skins: Models were trained primarily on default hero skins and may perform poorly on significantly different cosmetic skins.
- Class Definitions: Bruce Banner is not included as a separate class. Cloak & Dagger are a single class, which may cause confusion since Cloak instances were underrepresented in the dataset.
- HP Model: The HP detection model requires further data collection and potentially more robust labeling to reach higher accuracy.
- Quantization: Attempts to optimize inference speed using TensorRT FP16 on a GTX 1080 did not yield performance improvements, likely because the Pascal architecture lacks dedicated Tensor Cores (a sketch of the export attempt is shown below).
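
For anyone wanting to reproduce that experiment on a GPU with Tensor Cores, an FP16 TensorRT export via Ultralytics looks roughly like this. It is a sketch of the approach, not the exact commands used, and it assumes a working TensorRT installation:

```python
from ultralytics import YOLO

# Export the hero model to a TensorRT engine with FP16 precision.
# Requires TensorRT and a CUDA-capable GPU.
model = YOLO("models/pytorch/hero.pt")
engine_path = model.export(format="engine", half=True)

# The exported .engine file can be loaded back through the same API.
trt_model = YOLO(engine_path)
results = trt_model.predict("path/to/your/test_video.mp4")
```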
## Disclaimer

This project uses assets (screenshots, character likenesses) from the video game Marvel Rivals, developed and published by NetEase Games. All rights to the original game assets belong to NetEase Games. The license included in this repository applies only to the code, trained model weights, and annotations provided by the author of this project. It does not grant any rights to the underlying copyrighted game assets. This project is for educational and research purposes only and is not affiliated with or endorsed by NetEase Games.

## License
Distributed under the MIT License. See LICENSE for more information.
## Acknowledgements
- Edje Electronics for the foundational YOLO training tutorials and code.
- Ultralytics for the YOLOv8 implementation.
- The Label Studio team for the annotation tool.
## Usage
The primary inference script `quad_detect.py` runs all four detection models simultaneously on a given source.
Example command (using PyTorch models):

```
python scripts/quad_detect.py ^
    --model1 models/pytorch/hero.pt ^
    --model2 models/pytorch/hp.pt ^
    --model3 models/pytorch/ui.pt ^
    --model4 models/pytorch/friendfoe.pt ^
    --source path/to/your/test_video.mp4
```
(Use `\` instead of `^` for line continuation on macOS/Linux.)
Arguments:

- `--model1`: Path to the Hero detection model (`.pt` or `.onnx`).
- `--model2`: Path to the HP detection model (`.pt` or `.onnx`).
- `--model3`: Path to the UI detection model (`.pt` or `.onnx`).
- `--model4`: Path to the Friend/Foe detection model (`.pt` or `.onnx`).
- `--source`: Path to the input video file, image file, or webcam ID (e.g., `0`).
(Note: Ensure `quad_detect.py` is compatible with ONNX models if you intend to use the `.onnx` files, or update the script accordingly. This may require installing `onnxruntime` or `onnxruntime-gpu`.)
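
If you do go the ONNX route, a minimal `onnxruntime` smoke test looks like the sketch below. The `models/onnx/hero.onnx` path is an assumption mirroring the PyTorch folder layout, and real use additionally needs proper preprocessing (letterboxing, RGB ordering, [0, 1] normalization) plus decoding/NMS of the raw output tensor:

```python
import numpy as np
import onnxruntime as ort

# Path is an assumption based on the models/pytorch/ layout.
session = ort.InferenceSession("models/onnx/hero.onnx",
                               providers=["CPUExecutionProvider"])

inp = session.get_inputs()[0]
print("input:", inp.name, inp.shape)  # typically [1, 3, 640, 640] for YOLOv8 exports

# Dummy tensor just to confirm the session runs end to end.
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = session.run(None, {inp.name: dummy})
print("output:", outputs[0].shape)  # e.g. (1, 4 + num_classes, 8400)
```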