Oryx-ViT

Model Summary

The Oryx-ViT model is trained on a mixture of 200M samples and can seamlessly and efficiently process visual inputs of arbitrary spatial size and temporal length. It is described in the paper "Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution".
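
For orientation, below is a minimal loading sketch. It assumes the checkpoint exposes a standard Hugging Face AutoModel interface with trust_remote_code; the exact preprocessing and feature-extraction pipeline for arbitrary-resolution inputs is defined in the Oryx codebase, not here.

```python
# Minimal loading sketch; the AutoModel interface and trust_remote_code
# flag are assumptions, and preprocessing follows the Oryx codebase.
import torch
from transformers import AutoModel

vision_tower = AutoModel.from_pretrained(
    "THUdyh/Oryx-ViT",
    torch_dtype=torch.bfloat16,  # matches the training precision listed below
    trust_remote_code=True,
).eval()
```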

Model Architecture

  • Architecture: SigLIP
  • Data: a mixture of 200M samples, trained for 2 epochs
  • Precision: BFloat16

Hardware & Software

  • Hardware: 64 × NVIDIA A100 GPUs
  • Orchestration: HuggingFace Trainer
  • Code: PyTorch
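
As an illustration of the orchestration listed above (HF Trainer, BFloat16, 2 epochs), here is a hypothetical skeleton; the model, dataset, output path, and batch size are placeholders and not the released training configuration.

```python
# Hypothetical Trainer skeleton matching the setup listed above:
# HF Trainer, BFloat16 precision, 2 epochs. Values are placeholders.
from transformers import Trainer, TrainingArguments


def build_trainer(model, train_dataset):
    """Wire a model and dataset into a bf16, 2-epoch Trainer run."""
    args = TrainingArguments(
        output_dir="./oryx-vit-run",     # placeholder path
        bf16=True,                       # BFloat16 precision, as listed above
        num_train_epochs=2,              # 2 epochs over the data mixture
        per_device_train_batch_size=32,  # placeholder; not from the source
    )
    return Trainer(model=model, args=args, train_dataset=train_dataset)
```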

Citation

@article{liu2024oryx,
  title={Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution},
  author={Liu, Zuyan and Dong, Yuhao and Liu, Ziwei and Hu, Winston and Lu, Jiwen and Rao, Yongming},
  journal={arXiv preprint arXiv:2409.12961},
  year={2024}
}