Model Card for X3D-KABR-Kinetics

X3D-KABR-Kinetics is a behavior recognition model for in situ drone videos of zebras and giraffes, built on the X3D architecture initialized with Kinetics weights. It is trained on the KABR mini-scene dataset, which comprises 10 hours of aerial video footage of reticulated giraffes (Giraffa reticulata), Plains zebras (Equus quagga), and Grevy's zebras (Equus grevyi), captured using a DJI Mavic 2S drone. The dataset includes both spatiotemporal annotations (i.e., mini-scenes) and behavior annotations provided by an expert behavioral ecologist.

Model Details

Model Description

  • Developed by: Maksim Kholiavchenko, Maksim Kukushkin, Otto Brookes, Jenna Kline, Sam Stevens, Isla Duporge, Alec Sheets, Reshma R. Babu, Namrata Banerji, Elizabeth Campolongo, Matthew Thompson, Nina Van Tiel, Jackson Miliko, Eduardo Bessa, Majid Mirmehdi, Thomas Schmid, Tanya Berger-Wolf, Daniel I. Rubenstein, Tilo Burghardt, Charles V. Stewart

  • Model type: X3D-L

  • License: MIT

  • Fine-tuned from model: X3D-L (Kinetics pretrained)

This model was developed for the benefit of the community as an open-source product; we therefore request that any derivative products also be open source.

Model Sources

  • Repository: https://huggingface.co/imageomics/x3d-kabr-kinetics

  • Paper: KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition From Drone Videos (WACV Workshops, 2024)

Uses

X3D-KABR-Kinetics is designed for ungulate behavior classification from aerial video, supporting the study of animal behavior in situ.

Direct Use

This model can be used to generate time budgets from aerial video of animals; please see the illustrative examples in the kabr-tools docs for more information.

Out-of-Scope Use

This model was trained to detect and classify behavior from drone videos of zebras and giraffes in Kenya. It may not perform well on other species or settings.

How to Get Started with the Model

Please see the illustrative examples in the kabr-tools docs for worked examples of generating time budgets from aerial video of animals.
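As a rough illustration, the sketch below loads an X3D-L backbone and classifies a single clip. This is a minimal sketch, not the project's official pipeline: the x3d_l torch.hub entry point from PyTorchVideo, the checkpoint filename, and the clip dimensions are all assumptions, and weights exported from the SlowFast codebase may require key remapping before they load cleanly.

```python
# Hypothetical inference sketch; file names and sizes are placeholders.
import torch

# Build an X3D-L backbone (assumes PyTorchVideo's "x3d_l" hub entry point).
model = torch.hub.load("facebookresearch/pytorchvideo", "x3d_l", pretrained=False)

# Load the fine-tuned KABR weights (checkpoint name is illustrative; keys from
# a SlowFast-trained checkpoint may need remapping first).
state = torch.load("x3d_kabr_kinetics.pt", map_location="cpu")
model.load_state_dict(state, strict=False)
model.eval()

# X3D expects clips shaped (batch, channels, frames, height, width); the card
# reports a 16x5 sampling scheme, i.e. 16 frames per clip.
clip = torch.randn(1, 3, 16, 312, 312)  # replace with a real mini-scene clip
with torch.no_grad():
    logits = model(clip)
predicted_behavior = logits.argmax(dim=1)
```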

Training Details

We include the configuration file (config.yml) utilized by SlowFast for X3D model training.

Training Data

This model was trained on the KABR mini-scene dataset.

Training Procedure

Preprocessing

Raw drone videos were pre-processed using CVAT to detect and track each individual animal in each high-resolution video and link the results into tracklets. For each tracklet, we create a separate video, called a mini-scene, by extracting a sub-image centered on each detection in a video frame. This allows us to compensate for the drone's movement and provides a stable, zoomed-in representation of the animal.
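The cropping step can be pictured with the short sketch below. It is an illustrative reconstruction, not the released preprocessing code: the tracklet format (frame index plus detection center) and the 400x400 window size are assumptions.

```python
# Illustrative mini-scene extraction: crop a fixed window centered on each
# detection so the animal stays centered despite drone motion.
# Tracklet format ((frame_index, cx, cy) tuples) and window size are assumed.
import cv2

def extract_mini_scene(video_path, tracklet, out_path, size=400):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (size, size))
    half = size // 2
    for frame_idx, cx, cy in tracklet:
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Clamp the window so the crop never leaves the frame.
        x0 = min(max(int(cx) - half, 0), w - size)
        y0 = min(max(int(cy) - half, 0), h - size)
        writer.write(frame[y0:y0 + size, x0:x0 + size])
    cap.release()
    writer.release()
```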

See the KABR mini-scene project page and the paper for data preprocessing details.

We applied data augmentation during training: horizontal flipping, which randomly mirrors the input frames, and color augmentation, which randomly perturbs their brightness, contrast, and saturation.
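In torchvision terms, these augmentations read roughly as below; the jitter ranges and flip probability are assumptions, not the values in config.yml, and in practice the same random draw should be applied to every frame of a clip.

```python
# Sketch of the described augmentations (assumed parameter values).
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # random mirroring
    transforms.ColorJitter(brightness=0.4,   # random brightness/contrast/
                           contrast=0.4,     # saturation perturbation
                           saturation=0.4),
])
```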

Training Hyperparameters

The model was trained for 120 epochs with a batch size of 5. We used the equalization loss (EQL) to address the long-tailed class distribution and the SGD optimizer with a learning rate of 1e-5. Clips were sampled with a 16x5 scheme (16 frames at a temporal stride of 5), and the backbone was initialized with Kinetics-400 pretrained weights.
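A small sketch of that 16x5 sampling and the optimizer setup follows; the random-offset sampling strategy is an assumption, and the EQL loss implementation is omitted.

```python
# Sketch of 16x5 clip sampling and the stated optimizer settings.
import torch

def sample_clip_indices(num_video_frames, num_frames=16, stride=5):
    """Pick `num_frames` frames spaced `stride` apart from a random offset."""
    span = (num_frames - 1) * stride + 1  # frames covered by one clip
    start = torch.randint(0, max(num_video_frames - span, 0) + 1, (1,)).item()
    return [start + i * stride for i in range(num_frames)]

# SGD with the reported learning rate (model construction omitted):
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
```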

Evaluation

The X3D-L model was evaluated on the KABR mini-scene dataset using the SlowFast framework, specifically its test_net script.

Testing Data

We provide a train-test split of the mini-scenes from the KABR mini-scene dataset for evaluation purposes (the test set is listed in annotations/val.csv), with 75% of mini-scenes used for training and 25% for testing. No mini-scene was divided by the split, and the splits ensure stratified representation of giraffes, Plains zebras, and Grevy's zebras.
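A stratified split of this kind can be sketched with scikit-learn; the identifiers and labels below are placeholders, and the key point is that splitting happens per mini-scene, never within one.

```python
# Illustrative stratified 75/25 split at the mini-scene level.
from sklearn.model_selection import train_test_split

mini_scene_ids = [f"scene_{i:03d}" for i in range(100)]        # placeholder IDs
species_labels = (["giraffe", "plains", "grevys"] * 34)[:100]  # placeholder labels

train_ids, test_ids = train_test_split(
    mini_scene_ids,
    test_size=0.25,           # 75% train / 25% test, as described above
    stratify=species_labels,  # balanced species representation in both sets
    random_state=0,
)
```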

Metrics

We report precision, recall, and F1 score on the KABR mini-scene test set, along with the mean Average Precision (mAP) for overall, head-class, and tail-class performance.
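For reference, one way to compute these metrics with scikit-learn; macro averaging and the one-hot mAP formulation are assumptions about how the numbers below were aggregated.

```python
# Sketch of the reported metrics (placeholder predictions, 3 classes).
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, average_precision_score

y_true = np.array([0, 1, 2, 1])   # ground-truth behavior labels
y_score = np.random.rand(4, 3)    # per-class model scores
y_pred = y_score.argmax(axis=1)

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)

# mAP: mean per-class average precision against one-hot ground truth.
# Head/tail mAP would restrict this to frequent/rare behavior classes.
mAP = average_precision_score(np.eye(3)[y_true], y_score, average="macro")
```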

Results

| Weight init | Batch size | mAP (overall) | mAP (head) | mAP (tail) | Precision | Recall | F1 |
|-------------|------------|---------------|------------|------------|-----------|--------|-----|
| K-400 | 64 | 66.36 | 96.96 | 56.16 | 66.44 | 63.65 | 64.70 |

Model Architecture and Objective

Please see the Base Model Description.

Hardware

Running the X3D model requires a modern NVIDIA GPU with CUDA support. X3D-L is designed to be computationally efficient and requires 10–16 GB of GPU memory during training.

Citation

If you use our model in your work, please cite both the model and the associated paper. BibTeX entries follow.

Model

@software{kabr_x3d_model,
  author = {Maksim Kholiavchenko and Maksim Kukushkin and Otto Brookes and Jenna Kline and Samuel Stevens and Isla Duporge and Alec Sheets and Reshma R. Babu and Namrata Banerji and Elizabeth Campolongo and Matthew Thompson and Nina Van Tiel and Jackson Miliko and Eduardo Bessa and Majid Mirmehdi and Thomas Schmid and Tanya Berger-Wolf and Daniel I. Rubenstein and Tilo Burghardt and Charles V. Stewart},
  doi = {10.57967/hf/7191},
  title = {KABR model (Revision a56fd69)},
  year = {2025},
  url = {https://huggingface.co/imageomics/x3d-kabr-kinetics}
}

Paper

@InProceedings{Kholiavchenko_2024_WACV,
    author    = {Kholiavchenko, Maksim and Kline, Jenna and Ramirez, Michelle and Stevens, Sam and Sheets, Alec and Babu, Reshma and Banerji, Namrata and Campolongo, Elizabeth and Thompson, Matthew and Van Tiel, Nina and Miliko, Jackson and Bessa, Eduardo and Duporge, Isla and Berger-Wolf, Tanya and Rubenstein, Daniel and Stewart, Charles},
    title     = {KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition From Drone Videos},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2024},
    pages     = {31-40}
}

@article{kholiavchenko2025deep,
  title={Deep dive into {KABR}: a dataset for understanding ungulate behavior from in-situ drone video},
  author={Kholiavchenko, Maksim and Kline, Jenna and Kukushkin, Maksim and Brookes, Otto and Stevens, Sam and Duporge, Isla and Sheets, Alec and Babu, Reshma R and Banerji, Namrata and Campolongo, Elizabeth and others},
  journal={Multimedia Tools and Applications},
  volume={84},
  number={21},
  pages={24563--24582},
  year={2025},
  publisher={Springer},
  doi={10.1007/s11042-024-20512-4}
}

Acknowledgements

This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Additional support was also provided by the AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), which is funded by the US National Science Foundation under Award #2112606. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The data was gathered at the Mpala Research Centre in Kenya, in accordance with Research License No. NACOSTI/P/22/18214. The data collection protocol adhered strictly to the guidelines set forth by the Institutional Animal Care and Use Committee under permission No. IACUC 1835F.

Model Card Authors

Jenna Kline and Maksim Kholiavchenko

Model Card Contact

For questions on this model, please open a discussion on this repo.
