Model Card for YOLOv8n Rubber Duck Detection
NOTE: I DO NOT RECOMMEND USING THIS MODEL AT THIS TIME. There is an open discussion around licensing related to the data.
See related licensing discussion on the forum
This model is a fine-tuned version of YOLOv8n specifically optimized for rubber duck detection. It was developed to improve rubber duck detection on a course set up for the HackerBot Industries HB 0x01 hackathon, with the specific goal of detecting coordinates of rubber ducks in live video feeds.
Actual inference time on a Raspberry Pi 5 was around 330 ms, though the end-to-end process took much longer. More evaluation is needed to determine whether the response time is due to other bottlenecks or whether a smaller model is justified.
In any case, initial results suggest that this model could enable more accurate navigation within the hackathon course through improved duck location detection capabilities.
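As a minimal sketch of how the inference-only time can be separated from the rest of the loop (the model filename and camera index are illustrative, not the exact hackathon setup):

```python
import time

import cv2
from ultralytics import YOLO

# Load the fine-tuned detector (filename is illustrative).
model = YOLO("yolov8n-rubber-duck-detector.pt")

cap = cv2.VideoCapture(0)  # live video feed
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")

# Time only the forward pass, separate from capture and post-processing.
start = time.perf_counter()
results = model(frame, verbose=False)
inference_ms = (time.perf_counter() - start) * 1000
print(f"Inference: {inference_ms:.0f} ms, {len(results[0].boxes)} detections")
```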
Demo: Rubber Duck Detection Demo Space
Model Details
Model Description
- Developed by: Daniel Ritchie
- Model type: YOLOv8n (object detection)
- Language(s): Python (Computer Vision)
- License: MIT
- Finetuned from model: YOLOv8n
Model Sources
- Base Model: YOLOv8n.pt
- Original Datasets:
- Rubber-Duck-blip-captions by Norod78
- rubber_ducks by linoyts
Uses
Direct Use
The model is specifically designed for detecting rubber ducks and providing their coordinates. It was developed for a very specific use case within a hackathon context. The teddy bear class was used as a starting point; it was chosen specifically for its tendency to over-identify objects, which provided a good foundation for detecting rubber ducks. Note that the ducks were intentionally labeled as teddy bears during training, and the class label is changed at inference time. The model does not support any other classes.
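A minimal sketch of that relabeling step, assuming the retained class still carries the "teddy bear" name in the exported weights (check `model.names` for the actual mapping; the model filename is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-rubber-duck-detector.pt")  # filename is illustrative

# Find whichever class slot carries the "teddy bear" label and rename it,
# so downstream code and plotted boxes read "rubber duck".
duck_id = next(i for i, name in model.names.items() if name == "teddy bear")
model.names[duck_id] = "rubber duck"

results = model("course_frame.jpg")
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{model.names[int(box.cls)]}: ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), conf {float(box.conf):.2f}")
```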
Out-of-Scope Use
This model should not be used for:
- General object detection
- Production environments
- Safety-critical systems
- Any application requiring reliable teddy bear detection (as the original class was modified)
Bias, Risks, and Limitations
- The model is intentionally overfit to a specific use case
- Increased false positive rate for duck detection
- Modified teddy bear class may no longer reliably detect teddy bears
- Limited to the specific context and image conditions present in the training data
- Not suitable for general-purpose object detection
Recommendations
Users should be aware that this is a specialized model created for a specific hackathon use case. It should not be used in production environments or for general object detection tasks.
Evaluation
Results
The model demonstrated significant improvement during training, as shown in the comparison below:
| Metric | Initial Performance | Final Performance |
|---|---|---|
| Precision | 0.006 | 0.523 |
| Recall | 0.812 | 0.638 |
| mAP50 | 0.089 | 0.598 |
| mAP50-95 | 0.057 | 0.499 |
Initially, the model would enthusiastically label almost anything as a duck (teddy bear) while finding only a few actual ducks, and was infrequently correct when it claimed to have found one. The improved model is much more discerning: when it says it has found a duck, it is far more likely to have actually identified one. While this training approach reduced overall sensitivity to duck detection, testing in our specific deployment environment showed improved recall under specific circumstances, suggesting better alignment with real-world conditions. This increased reliability, combined with more accurate placement of bounding boxes around actual ducks, makes the final model much more practical for real-world use. In a controlled environment, the increase in precision may be offset by the decrease in recall, though environment-specific data would certainly be helpful. Adjustments to hyperparameters produced a wide range of outcomes, suggesting significant potential for further improvement.
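The final-column numbers should be reproducible with the standard ultralytics validation API; a minimal sketch, assuming a dataset YAML describing the annotated duck images (the `ducks.yaml` name is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-rubber-duck-detector.pt")  # filename is illustrative

# Validate against the split defined in the dataset YAML.
metrics = model.val(data="ducks.yaml")
print(f"Precision: {metrics.box.mp:.3f}")   # mean precision over classes
print(f"Recall:    {metrics.box.mr:.3f}")   # mean recall over classes
print(f"mAP50:     {metrics.box.map50:.3f}")
print(f"mAP50-95:  {metrics.box.map:.3f}")
```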
Model Statistics
- Layers: 168
- Parameters: 3,005,843
- GFLOPs: 8.1
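These figures match what ultralytics prints in its model summary; they can be verified locally (filename illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-rubber-duck-detector.pt")  # filename is illustrative

# Prints a one-line summary: layer count, parameter count, and GFLOPs.
model.info(detailed=False)
```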
Training Details
Training Data
The training data was derived from two Hugging Face datasets: Rubber-Duck-blip-captions by Norod78 and rubber_ducks by linoyts.
Data preparation process:
- Existing labels were stripped
- Initial automated annotation was performed using YOLOv8x's teddy bear class, though the results left much to be desired (see the sketch after this list)
- Manual verification and correction of bounding boxes was performed using CVAT (Computer Vision Annotation Tool)
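A minimal sketch of that automated pre-annotation step, assuming the standard COCO class mapping where "teddy bear" is index 77 (directory and filenames are illustrative; the resulting YOLO-format labels were then corrected by hand in CVAT):

```python
from pathlib import Path

from ultralytics import YOLO

# Pre-annotate with the larger YOLOv8x model, restricted to the
# COCO "teddy bear" class (index 77 in the standard mapping).
model = YOLO("yolov8x.pt")

for image_path in Path("duck_images").glob("*.jpg"):
    results = model(image_path, classes=[77], verbose=False)
    boxes = results[0].boxes.xywhn  # normalized (cx, cy, w, h) -- YOLO label format
    with open(image_path.with_suffix(".txt"), "w") as f:
        for cx, cy, w, h in boxes.tolist():
            f.write(f"0 {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")  # class 0 = duck
```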
Training Procedure
Hardware Specifications
- GPU: NVIDIA A6000
- Configuration: 6 CPUs, 48 GB RAM/VRAM
Environmental Impact
- Hardware Type: RTX A6000
- Hours Used: 4 (annotation < 2 minutes, actual training < 10 minutes)
- Cloud Provider: IMWT
- Compute Region: US Central
- Carbon Emitted: 0.5 kg of CO2eq
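For context, the emissions figure is consistent with a rough back-of-envelope estimate, assuming the A6000's 300 W TDP and a grid intensity of roughly 0.4 kg CO2eq/kWh (both are assumptions, not reported values):

0.3 kW × 4 h × ~0.4 kg CO2eq/kWh ≈ 0.5 kg CO2eq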
Technical Specifications
Model Architecture and Objective
- Base Architecture: YOLOv8n
- Task: Object Detection
- Specific Focus: Rubber duck detection through modification of teddy bear class
Compute Infrastructure
Hardware
- Single NVIDIA A6000 GPU
- Used for both:
- Initial automated annotation
- Model training
Model Card Contact
For more information about this model, please contact Daniel by email or message the Brain Wave Collective through the website form.