# Mobile-VideoGPT-1.5B

## Description
Mobile-VideoGPT is an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models, Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM) with real-time throughput. We evaluate our model across six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NextQA, and PerceptionTest), and our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter competitors.

This repository contains the Mobile-VideoGPT checkpoint built on the Qwen-2.5-1.5B LLM.
## Download

To get started, clone the repository with Git LFS:

```shell
git lfs install
git clone https://huggingface.co/Amshaker/Mobile-VideoGPT-1.5B
```
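As an alternative to Git LFS, the same files can be fetched programmatically with the `huggingface_hub` library. This is a minimal sketch: `fetch_checkpoint` is a helper name of our own (not part of the Mobile-VideoGPT codebase), and it simply wraps `snapshot_download`, which caches the repository locally and returns the path.

```python
from huggingface_hub import snapshot_download


def fetch_checkpoint(repo_id: str = "Amshaker/Mobile-VideoGPT-1.5B") -> str:
    """Download all files of the model repository and return the local path.

    `snapshot_download` caches files under ~/.cache/huggingface by default,
    so repeated calls do not re-download unchanged files.
    """
    return snapshot_download(repo_id=repo_id)


if __name__ == "__main__":
    # Downloads several GB of weights on first run.
    print(fetch_checkpoint())
```

The download runs only when the script is executed directly, so the helper can also be imported and pointed at a custom cache via `snapshot_download`'s optional `local_dir` argument if desired.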
## Additional Resources

### Citation
```bibtex
@article{Shaker2025MobileVideoGPT,
  title={Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model},
  author={Shaker, Abdelrahman and Maaz, Muhammad and Rezatofighi, Hamid and Khan, Salman and Khan, Fahad Shahbaz},
  journal={arXiv preprint arXiv:2503.21782},
  year={2025},
  url={https://arxiv.org/abs/2503.21782}
}
```