---
license: cc-by-4.0
tags:
  - ocean
  - object-detection
---

FathomNet Vulnerable Marine Ecosystems (VME) Detector

Model Details

  • Trained by researchers at the Monterey Bay Aquarium Research Institute (MBARI).
  • Ultralytics YOLOv8x
  • Object detection model
  • Fine-tuned to detect 4 high-level classes of benthic animals in deep-sea imagery that are identified as indicators of vulnerable marine ecosystems (VMEs)
    • These VME categories are corals, crinoids, sponges, and fishes; the exact class labels stored in the checkpoint can be inspected as in the sketch below
    • Baco et al. 2023 (Table 2) was used to determine which classes are useful for detecting VMEs; fishes were added as an additional class because VMEs and fishery management often overlap
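The exact label strings live in the released checkpoint rather than in this card. A minimal sketch, assuming best.pt from this repository is in the working directory, for listing them with the ultralytics Python API:

```python
from ultralytics import YOLO

# Load the released checkpoint and print its class-index-to-name mapping.
model = YOLO("best.pt")
print(model.names)  # e.g. {0: "...", 1: "...", 2: "...", 3: "..."}
```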

Intended Use

  • Post-process video and images collected by marine researchers to determine the presence of VME indicator species (see the sketch below)
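A minimal post-processing sketch, assuming best.pt from this repository and a placeholder directory of survey imagery, that records which VME indicator classes were detected in each image:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # weights released with this repository

# "survey_images" is a placeholder; point it at your own images or video.
presence = {}
for result in model.predict(source="survey_images", conf=0.25, stream=True):
    detected = {model.names[int(c)] for c in result.boxes.cls}
    presence[result.path] = detected

for path, classes in presence.items():
    print(path, sorted(classes) if classes else "no VME indicators detected")
```

The confidence threshold shown is the ultralytics default; it should be tuned for the deployment environment.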

Factors

  • Distribution shifts related to sampling platform, camera parameters, illumination, and deployment environment are expected to impact model performance
  • Evaluation was performed on an IID subset of the available training data as well as on out-of-distribution data (a sketch for re-running evaluation on your own labeled data follows below)
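Given these expected shifts, it may be worth quantifying performance on imagery from your own platform before relying on the detector. A sketch, assuming a hypothetical YOLO-format dataset described by my_vme_data.yaml whose labels match the model's classes:

```python
from ultralytics import YOLO

# Re-evaluate the released weights on your own labeled data to gauge the
# impact of distribution shift; "my_vme_data.yaml" is a placeholder config.
model = YOLO("best.pt")
metrics = model.val(data="my_vme_data.yaml")
print(metrics.box.map50)  # mAP@0.5 on your data
```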

Metrics

Training and Evaluation Data

  • Publicly available data on FathomNet
  • TODO: Add specific class to concept mapping used to query FathomNet (a hypothetical query sketch follows this list)
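Until that mapping is added, the sketch below only illustrates the general pattern of querying FathomNet with the fathomnet Python client (pip install fathomnet); the concept name is a placeholder, not necessarily one used to build the training set:

```python
from fathomnet.api import images

# "Porifera" is a hypothetical example concept, not the actual mapping
# used for this model's training data.
concept = "Porifera"
records = images.find_by_concept(concept)
print(f"FathomNet returned {len(records)} images containing '{concept}'")
```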

Deployment

  1. Clone this repository
  2. In an environment with the ultralytics Python package installed, run the following (an equivalent Python-API call is sketched below):
yolo predict model=best.pt
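
The same prediction can be run from Python. A minimal sketch, where the source path is a placeholder for your own imagery (the CLI call above accepts a source=... argument in the same way):

```python
from ultralytics import YOLO

# Run the released detector over your own images or video and save
# annotated outputs; "path/to/imagery" is a placeholder.
model = YOLO("best.pt")
results = model.predict(source="path/to/imagery", save=True)
```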